00:00:00.000 Started by upstream project "autotest-per-patch" build number 126258 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.056 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.075 Using shallow fetch with depth 1 00:00:00.075 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.075 > git --version # timeout=10 00:00:00.098 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.133 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.133 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.934 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.946 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.959 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.959 > git config core.sparsecheckout # timeout=10 00:00:03.970 > git read-tree -mu HEAD # timeout=10 00:00:03.989 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.010 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.010 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.125 [Pipeline] Start of Pipeline 00:00:04.141 [Pipeline] library 00:00:04.143 Loading library shm_lib@master 00:00:04.143 Library shm_lib@master is cached. Copying from home. 00:00:04.159 [Pipeline] node 00:00:04.172 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.173 [Pipeline] { 00:00:04.184 [Pipeline] catchError 00:00:04.186 [Pipeline] { 00:00:04.197 [Pipeline] wrap 00:00:04.205 [Pipeline] { 00:00:04.213 [Pipeline] stage 00:00:04.214 [Pipeline] { (Prologue) 00:00:04.476 [Pipeline] sh 00:00:04.769 + logger -p user.info -t JENKINS-CI 00:00:04.791 [Pipeline] echo 00:00:04.793 Node: CYP12 00:00:04.804 [Pipeline] sh 00:00:05.110 [Pipeline] setCustomBuildProperty 00:00:05.124 [Pipeline] echo 00:00:05.125 Cleanup processes 00:00:05.131 [Pipeline] sh 00:00:05.417 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.417 730753 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.433 [Pipeline] sh 00:00:05.722 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.722 ++ grep -v 'sudo pgrep' 00:00:05.722 ++ awk '{print $1}' 00:00:05.722 + sudo kill -9 00:00:05.722 + true 00:00:05.740 [Pipeline] cleanWs 00:00:05.751 [WS-CLEANUP] Deleting project workspace... 00:00:05.751 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.759 [WS-CLEANUP] done 00:00:05.763 [Pipeline] setCustomBuildProperty 00:00:05.779 [Pipeline] sh 00:00:06.066 + sudo git config --global --replace-all safe.directory '*' 00:00:06.136 [Pipeline] httpRequest 00:00:06.161 [Pipeline] echo 00:00:06.163 Sorcerer 10.211.164.101 is alive 00:00:06.170 [Pipeline] httpRequest 00:00:06.175 HttpMethod: GET 00:00:06.175 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.176 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.178 Response Code: HTTP/1.1 200 OK 00:00:06.179 Success: Status code 200 is in the accepted range: 200,404 00:00:06.179 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.975 [Pipeline] sh 00:00:07.258 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.275 [Pipeline] httpRequest 00:00:07.304 [Pipeline] echo 00:00:07.305 Sorcerer 10.211.164.101 is alive 00:00:07.315 [Pipeline] httpRequest 00:00:07.321 HttpMethod: GET 00:00:07.322 URL: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:07.322 Sending request to url: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:07.343 Response Code: HTTP/1.1 200 OK 00:00:07.344 Success: Status code 200 is in the accepted range: 200,404 00:00:07.345 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:47.978 [Pipeline] sh 00:00:48.268 + tar --no-same-owner -xf spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:50.823 [Pipeline] sh 00:00:51.102 + git -C spdk log --oneline -n5 00:00:51.102 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:00:51.102 47ca8c1aa nvme: populate socket_id for rdma controllers 00:00:51.102 c1860effd nvme: populate socket_id for tcp controllers 00:00:51.102 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:51.102 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:51.114 [Pipeline] } 00:00:51.131 [Pipeline] // stage 00:00:51.138 [Pipeline] stage 00:00:51.140 [Pipeline] { (Prepare) 00:00:51.155 [Pipeline] writeFile 00:00:51.168 [Pipeline] sh 00:00:51.449 + logger -p user.info -t JENKINS-CI 00:00:51.461 [Pipeline] sh 00:00:51.740 + logger -p user.info -t JENKINS-CI 00:00:51.751 [Pipeline] sh 00:00:52.031 + cat autorun-spdk.conf 00:00:52.031 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.031 SPDK_TEST_NVMF=1 00:00:52.031 SPDK_TEST_NVME_CLI=1 00:00:52.031 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.031 SPDK_TEST_NVMF_NICS=e810 00:00:52.031 SPDK_TEST_VFIOUSER=1 00:00:52.031 SPDK_RUN_UBSAN=1 00:00:52.031 NET_TYPE=phy 00:00:52.038 RUN_NIGHTLY=0 00:00:52.042 [Pipeline] readFile 00:00:52.062 [Pipeline] withEnv 00:00:52.064 [Pipeline] { 00:00:52.074 [Pipeline] sh 00:00:52.354 + set -ex 00:00:52.354 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:52.354 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.354 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.354 ++ SPDK_TEST_NVMF=1 00:00:52.354 ++ SPDK_TEST_NVME_CLI=1 00:00:52.354 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.354 ++ SPDK_TEST_NVMF_NICS=e810 00:00:52.354 ++ SPDK_TEST_VFIOUSER=1 00:00:52.354 ++ SPDK_RUN_UBSAN=1 00:00:52.354 ++ NET_TYPE=phy 00:00:52.354 ++ RUN_NIGHTLY=0 00:00:52.354 + case $SPDK_TEST_NVMF_NICS in 00:00:52.354 + DRIVERS=ice 00:00:52.354 + [[ tcp == 
\r\d\m\a ]] 00:00:52.354 + [[ -n ice ]] 00:00:52.354 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:52.354 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:52.354 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:52.354 rmmod: ERROR: Module irdma is not currently loaded 00:00:52.354 rmmod: ERROR: Module i40iw is not currently loaded 00:00:52.354 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:52.354 + true 00:00:52.354 + for D in $DRIVERS 00:00:52.354 + sudo modprobe ice 00:00:52.354 + exit 0 00:00:52.363 [Pipeline] } 00:00:52.376 [Pipeline] // withEnv 00:00:52.381 [Pipeline] } 00:00:52.397 [Pipeline] // stage 00:00:52.405 [Pipeline] catchError 00:00:52.406 [Pipeline] { 00:00:52.420 [Pipeline] timeout 00:00:52.420 Timeout set to expire in 50 min 00:00:52.421 [Pipeline] { 00:00:52.439 [Pipeline] stage 00:00:52.441 [Pipeline] { (Tests) 00:00:52.457 [Pipeline] sh 00:00:52.745 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.745 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.745 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.745 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:52.745 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.745 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.745 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:52.745 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.745 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.745 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.745 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:52.745 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.745 + source /etc/os-release 00:00:52.745 ++ NAME='Fedora Linux' 00:00:52.745 ++ VERSION='38 (Cloud Edition)' 00:00:52.745 ++ ID=fedora 00:00:52.745 ++ VERSION_ID=38 00:00:52.745 ++ VERSION_CODENAME= 00:00:52.745 ++ PLATFORM_ID=platform:f38 00:00:52.745 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:52.745 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:52.745 ++ LOGO=fedora-logo-icon 00:00:52.745 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:52.745 ++ HOME_URL=https://fedoraproject.org/ 00:00:52.745 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:52.745 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:52.745 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:52.745 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:52.745 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:52.745 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:52.745 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:52.745 ++ SUPPORT_END=2024-05-14 00:00:52.745 ++ VARIANT='Cloud Edition' 00:00:52.745 ++ VARIANT_ID=cloud 00:00:52.745 + uname -a 00:00:52.745 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:52.745 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:56.058 Hugepages 00:00:56.058 node hugesize free / total 00:00:56.058 node0 1048576kB 0 / 0 00:00:56.058 node0 2048kB 0 / 0 00:00:56.058 node1 1048576kB 0 / 0 00:00:56.058 node1 2048kB 0 / 0 00:00:56.058 00:00:56.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:56.058 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 
0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:56.058 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:56.318 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:56.318 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:56.318 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:56.318 + rm -f /tmp/spdk-ld-path 00:00:56.318 + source autorun-spdk.conf 00:00:56.318 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.318 ++ SPDK_TEST_NVMF=1 00:00:56.318 ++ SPDK_TEST_NVME_CLI=1 00:00:56.318 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.318 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.318 ++ SPDK_TEST_VFIOUSER=1 00:00:56.318 ++ SPDK_RUN_UBSAN=1 00:00:56.318 ++ NET_TYPE=phy 00:00:56.318 ++ RUN_NIGHTLY=0 00:00:56.318 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:56.318 + [[ -n '' ]] 00:00:56.318 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.318 + for M in /var/spdk/build-*-manifest.txt 00:00:56.318 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:56.318 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:56.318 + for M in /var/spdk/build-*-manifest.txt 00:00:56.318 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:56.318 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:56.318 ++ uname 00:00:56.318 + [[ Linux == \L\i\n\u\x ]] 00:00:56.318 + sudo dmesg -T 00:00:56.318 + sudo dmesg --clear 00:00:56.318 + dmesg_pid=732328 00:00:56.318 + [[ Fedora Linux == FreeBSD ]] 00:00:56.318 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.318 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.318 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:56.318 + [[ -x /usr/src/fio-static/fio ]] 00:00:56.318 + export FIO_BIN=/usr/src/fio-static/fio 00:00:56.318 + FIO_BIN=/usr/src/fio-static/fio 00:00:56.318 + sudo dmesg -Tw 00:00:56.318 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:56.318 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:56.318 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:56.318 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.318 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.318 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:56.318 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.318 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.318 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.318 Test configuration: 00:00:56.318 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.318 SPDK_TEST_NVMF=1 00:00:56.318 SPDK_TEST_NVME_CLI=1 00:00:56.318 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.318 SPDK_TEST_NVMF_NICS=e810 00:00:56.318 SPDK_TEST_VFIOUSER=1 00:00:56.318 SPDK_RUN_UBSAN=1 00:00:56.318 NET_TYPE=phy 00:00:56.579 RUN_NIGHTLY=0 00:12:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:56.579 00:12:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:56.579 00:12:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:56.579 00:12:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:56.579 00:12:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.579 00:12:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.579 00:12:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.579 00:12:09 -- paths/export.sh@5 -- $ export PATH 00:00:56.579 00:12:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.579 00:12:09 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:56.579 00:12:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:56.579 00:12:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081529.XXXXXX 00:00:56.579 00:12:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081529.CfLsyG 00:00:56.579 00:12:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:56.579 00:12:10 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:56.579 00:12:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:56.579 00:12:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:56.579 00:12:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:56.579 00:12:10 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:56.579 00:12:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:56.579 00:12:10 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.579 00:12:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:56.579 00:12:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:56.579 00:12:10 -- pm/common@17 -- $ local monitor 00:00:56.579 00:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.579 00:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.579 00:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.579 00:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.579 00:12:10 -- pm/common@21 -- $ date +%s 00:00:56.579 00:12:10 -- pm/common@21 -- $ date +%s 00:00:56.579 00:12:10 -- pm/common@25 -- $ sleep 1 00:00:56.579 00:12:10 -- pm/common@21 -- $ date +%s 00:00:56.579 00:12:10 -- pm/common@21 -- $ date +%s 00:00:56.579 00:12:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081530 00:00:56.579 00:12:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081530 00:00:56.579 00:12:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081530 00:00:56.579 00:12:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081530 00:00:56.579 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081530_collect-vmstat.pm.log 00:00:56.579 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081530_collect-cpu-load.pm.log 00:00:56.579 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081530_collect-cpu-temp.pm.log 00:00:56.579 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081530_collect-bmc-pm.bmc.pm.log 00:00:57.524 00:12:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:57.524 00:12:11 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:57.524 00:12:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:57.524 00:12:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.524 00:12:11 -- spdk/autobuild.sh@16 -- $ date -u 00:00:57.524 Mon Jul 15 10:12:11 PM UTC 2024 00:00:57.524 00:12:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:57.524 v24.09-pre-234-gfcbf7f00f 00:00:57.524 00:12:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:57.524 00:12:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:57.524 00:12:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:57.524 00:12:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:57.524 00:12:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:57.524 00:12:11 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.524 ************************************ 00:00:57.524 START TEST ubsan 00:00:57.524 ************************************ 00:00:57.524 00:12:11 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:57.524 using ubsan 00:00:57.524 00:00:57.524 real 0m0.001s 00:00:57.524 user 0m0.000s 00:00:57.524 sys 0m0.000s 00:00:57.524 00:12:11 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:57.524 00:12:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:57.524 ************************************ 00:00:57.524 END TEST ubsan 00:00:57.524 ************************************ 00:00:57.524 00:12:11 -- common/autotest_common.sh@1142 -- $ return 0 00:00:57.524 00:12:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:57.524 00:12:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:57.524 00:12:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:57.524 00:12:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:57.785 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:57.785 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:58.047 Using 'verbs' RDMA provider 00:01:13.900 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:26.207 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:26.207 Creating mk/config.mk...done. 00:01:26.207 Creating mk/cc.flags.mk...done. 00:01:26.207 Type 'make' to build. 
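For reference, the configure step that just completed can be reproduced outside Jenkins with a short script. This is a minimal sketch, not the job's own tooling: the local checkout path, the fio source location (/usr/src/fio, mirroring the --with-fio flag above) and the job count are assumptions; the configure flags themselves are the ones printed in the log.

#!/usr/bin/env bash
# Sketch: rerun the SPDK configure/build step from this job on a local checkout.
# Assumptions: SPDK sources in ./spdk, fio sources in /usr/src/fio, $(nproc) build jobs.
set -euo pipefail
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"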
00:01:26.207 00:12:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:26.207 00:12:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.207 00:12:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.207 00:12:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.207 ************************************ 00:01:26.207 START TEST make 00:01:26.207 ************************************ 00:01:26.207 00:12:39 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:26.207 make[1]: Nothing to be done for 'all'. 00:01:27.146 The Meson build system 00:01:27.146 Version: 1.3.1 00:01:27.146 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:27.146 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.146 Build type: native build 00:01:27.146 Project name: libvfio-user 00:01:27.146 Project version: 0.0.1 00:01:27.146 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:27.146 C linker for the host machine: cc ld.bfd 2.39-16 00:01:27.146 Host machine cpu family: x86_64 00:01:27.146 Host machine cpu: x86_64 00:01:27.146 Run-time dependency threads found: YES 00:01:27.146 Library dl found: YES 00:01:27.146 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:27.146 Run-time dependency json-c found: YES 0.17 00:01:27.146 Run-time dependency cmocka found: YES 1.1.7 00:01:27.146 Program pytest-3 found: NO 00:01:27.146 Program flake8 found: NO 00:01:27.146 Program misspell-fixer found: NO 00:01:27.146 Program restructuredtext-lint found: NO 00:01:27.146 Program valgrind found: YES (/usr/bin/valgrind) 00:01:27.146 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:27.146 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:27.146 Compiler for C supports arguments -Wwrite-strings: YES 00:01:27.146 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:27.146 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:27.146 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:27.146 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:27.146 Build targets in project: 8 00:01:27.146 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:27.146 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:27.146 00:01:27.146 libvfio-user 0.0.1 00:01:27.146 00:01:27.146 User defined options 00:01:27.146 buildtype : debug 00:01:27.146 default_library: shared 00:01:27.146 libdir : /usr/local/lib 00:01:27.146 00:01:27.146 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:27.405 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.666 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:27.666 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:27.666 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:27.666 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:27.666 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:27.666 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:27.666 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:27.666 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:27.666 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:27.666 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:27.666 [11/37] Compiling C object samples/null.p/null.c.o 00:01:27.666 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:27.666 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:27.666 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:27.666 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:27.666 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:27.666 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:27.666 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:27.666 [19/37] Compiling C object samples/server.p/server.c.o 00:01:27.666 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:27.666 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:27.666 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:27.666 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:27.666 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:27.666 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:27.666 [26/37] Compiling C object samples/client.p/client.c.o 00:01:27.666 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:27.666 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:27.666 [29/37] Linking target samples/client 00:01:27.925 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:27.925 [31/37] Linking target test/unit_tests 00:01:27.925 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:27.925 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:27.925 [34/37] Linking target samples/server 00:01:27.925 [35/37] Linking target samples/gpio-pci-idio-16 00:01:27.925 [36/37] Linking target samples/lspci 00:01:27.925 [37/37] Linking target samples/null 00:01:27.925 INFO: autodetecting backend as ninja 00:01:27.925 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:27.925 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.494 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.494 ninja: no work to do. 00:01:35.080 The Meson build system 00:01:35.080 Version: 1.3.1 00:01:35.080 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:35.080 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:35.080 Build type: native build 00:01:35.080 Program cat found: YES (/usr/bin/cat) 00:01:35.080 Project name: DPDK 00:01:35.080 Project version: 24.03.0 00:01:35.080 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:35.080 C linker for the host machine: cc ld.bfd 2.39-16 00:01:35.081 Host machine cpu family: x86_64 00:01:35.081 Host machine cpu: x86_64 00:01:35.081 Message: ## Building in Developer Mode ## 00:01:35.081 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.081 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:35.081 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.081 Program python3 found: YES (/usr/bin/python3) 00:01:35.081 Program cat found: YES (/usr/bin/cat) 00:01:35.081 Compiler for C supports arguments -march=native: YES 00:01:35.081 Checking for size of "void *" : 8 00:01:35.081 Checking for size of "void *" : 8 (cached) 00:01:35.081 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:35.081 Library m found: YES 00:01:35.081 Library numa found: YES 00:01:35.081 Has header "numaif.h" : YES 00:01:35.081 Library fdt found: NO 00:01:35.081 Library execinfo found: NO 00:01:35.081 Has header "execinfo.h" : YES 00:01:35.081 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.081 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.081 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.081 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.081 Run-time dependency openssl found: YES 3.0.9 00:01:35.081 Run-time dependency libpcap found: YES 1.10.4 00:01:35.081 Has header "pcap.h" with dependency libpcap: YES 00:01:35.081 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.081 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.081 Compiler for C supports arguments -Wformat: YES 00:01:35.081 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.081 Compiler for C supports arguments -Wformat-security: NO 00:01:35.081 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.081 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.081 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.081 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.081 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.081 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.081 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.081 Compiler for C supports arguments -Wundef: YES 00:01:35.081 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.081 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.081 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:35.081 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.081 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.081 Program objdump found: YES (/usr/bin/objdump) 00:01:35.081 Compiler for C supports arguments -mavx512f: YES 00:01:35.081 Checking if "AVX512 checking" compiles: YES 00:01:35.081 Fetching value of define "__SSE4_2__" : 1 00:01:35.081 Fetching value of define "__AES__" : 1 00:01:35.081 Fetching value of define "__AVX__" : 1 00:01:35.081 Fetching value of define "__AVX2__" : 1 00:01:35.081 Fetching value of define "__AVX512BW__" : 1 00:01:35.081 Fetching value of define "__AVX512CD__" : 1 00:01:35.081 Fetching value of define "__AVX512DQ__" : 1 00:01:35.081 Fetching value of define "__AVX512F__" : 1 00:01:35.081 Fetching value of define "__AVX512VL__" : 1 00:01:35.081 Fetching value of define "__PCLMUL__" : 1 00:01:35.081 Fetching value of define "__RDRND__" : 1 00:01:35.081 Fetching value of define "__RDSEED__" : 1 00:01:35.081 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:35.081 Fetching value of define "__znver1__" : (undefined) 00:01:35.081 Fetching value of define "__znver2__" : (undefined) 00:01:35.081 Fetching value of define "__znver3__" : (undefined) 00:01:35.081 Fetching value of define "__znver4__" : (undefined) 00:01:35.081 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.081 Message: lib/log: Defining dependency "log" 00:01:35.081 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.081 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.081 Checking for function "getentropy" : NO 00:01:35.081 Message: lib/eal: Defining dependency "eal" 00:01:35.081 Message: lib/ring: Defining dependency "ring" 00:01:35.081 Message: lib/rcu: Defining dependency "rcu" 00:01:35.081 Message: lib/mempool: Defining dependency "mempool" 00:01:35.081 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.081 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.081 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.081 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.081 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.081 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.081 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:35.081 Compiler for C supports arguments -mpclmul: YES 00:01:35.081 Compiler for C supports arguments -maes: YES 00:01:35.081 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.081 Compiler for C supports arguments -mavx512bw: YES 00:01:35.081 Compiler for C supports arguments -mavx512dq: YES 00:01:35.081 Compiler for C supports arguments -mavx512vl: YES 00:01:35.081 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.081 Compiler for C supports arguments -mavx2: YES 00:01:35.081 Compiler for C supports arguments -mavx: YES 00:01:35.081 Message: lib/net: Defining dependency "net" 00:01:35.081 Message: lib/meter: Defining dependency "meter" 00:01:35.081 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.081 Message: lib/pci: Defining dependency "pci" 00:01:35.081 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.081 Message: lib/hash: Defining dependency "hash" 00:01:35.081 Message: lib/timer: Defining dependency "timer" 00:01:35.081 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.081 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.081 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.081 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:35.081 Message: lib/power: Defining dependency "power" 00:01:35.081 Message: lib/reorder: Defining dependency "reorder" 00:01:35.081 Message: lib/security: Defining dependency "security" 00:01:35.081 Has header "linux/userfaultfd.h" : YES 00:01:35.081 Has header "linux/vduse.h" : YES 00:01:35.081 Message: lib/vhost: Defining dependency "vhost" 00:01:35.081 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.081 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.081 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.081 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.081 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:35.081 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:35.081 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:35.081 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:35.081 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:35.081 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:35.081 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.081 Configuring doxy-api-html.conf using configuration 00:01:35.081 Configuring doxy-api-man.conf using configuration 00:01:35.081 Program mandb found: YES (/usr/bin/mandb) 00:01:35.081 Program sphinx-build found: NO 00:01:35.081 Configuring rte_build_config.h using configuration 00:01:35.081 Message: 00:01:35.081 ================= 00:01:35.081 Applications Enabled 00:01:35.081 ================= 00:01:35.081 00:01:35.081 apps: 00:01:35.081 00:01:35.081 00:01:35.081 Message: 00:01:35.081 ================= 00:01:35.081 Libraries Enabled 00:01:35.081 ================= 00:01:35.081 00:01:35.081 libs: 00:01:35.081 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.081 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:35.081 cryptodev, dmadev, power, reorder, security, vhost, 00:01:35.081 00:01:35.081 Message: 00:01:35.081 =============== 00:01:35.081 Drivers Enabled 00:01:35.081 =============== 00:01:35.081 00:01:35.081 common: 00:01:35.081 00:01:35.081 bus: 00:01:35.081 pci, vdev, 00:01:35.081 mempool: 00:01:35.081 ring, 00:01:35.081 dma: 00:01:35.081 00:01:35.081 net: 00:01:35.081 00:01:35.081 crypto: 00:01:35.081 00:01:35.081 compress: 00:01:35.081 00:01:35.081 vdpa: 00:01:35.081 00:01:35.081 00:01:35.081 Message: 00:01:35.081 ================= 00:01:35.081 Content Skipped 00:01:35.081 ================= 00:01:35.081 00:01:35.081 apps: 00:01:35.081 dumpcap: explicitly disabled via build config 00:01:35.081 graph: explicitly disabled via build config 00:01:35.081 pdump: explicitly disabled via build config 00:01:35.081 proc-info: explicitly disabled via build config 00:01:35.081 test-acl: explicitly disabled via build config 00:01:35.081 test-bbdev: explicitly disabled via build config 00:01:35.081 test-cmdline: explicitly disabled via build config 00:01:35.081 test-compress-perf: explicitly disabled via build config 00:01:35.081 test-crypto-perf: explicitly disabled via build config 00:01:35.081 test-dma-perf: explicitly disabled via build config 00:01:35.081 test-eventdev: explicitly disabled via build config 00:01:35.081 test-fib: explicitly disabled via build config 00:01:35.081 test-flow-perf: explicitly disabled via build config 00:01:35.081 test-gpudev: explicitly disabled via build config 00:01:35.081 
test-mldev: explicitly disabled via build config 00:01:35.081 test-pipeline: explicitly disabled via build config 00:01:35.081 test-pmd: explicitly disabled via build config 00:01:35.081 test-regex: explicitly disabled via build config 00:01:35.081 test-sad: explicitly disabled via build config 00:01:35.081 test-security-perf: explicitly disabled via build config 00:01:35.081 00:01:35.081 libs: 00:01:35.081 argparse: explicitly disabled via build config 00:01:35.081 metrics: explicitly disabled via build config 00:01:35.081 acl: explicitly disabled via build config 00:01:35.081 bbdev: explicitly disabled via build config 00:01:35.081 bitratestats: explicitly disabled via build config 00:01:35.081 bpf: explicitly disabled via build config 00:01:35.081 cfgfile: explicitly disabled via build config 00:01:35.081 distributor: explicitly disabled via build config 00:01:35.081 efd: explicitly disabled via build config 00:01:35.081 eventdev: explicitly disabled via build config 00:01:35.081 dispatcher: explicitly disabled via build config 00:01:35.081 gpudev: explicitly disabled via build config 00:01:35.081 gro: explicitly disabled via build config 00:01:35.081 gso: explicitly disabled via build config 00:01:35.081 ip_frag: explicitly disabled via build config 00:01:35.081 jobstats: explicitly disabled via build config 00:01:35.081 latencystats: explicitly disabled via build config 00:01:35.081 lpm: explicitly disabled via build config 00:01:35.081 member: explicitly disabled via build config 00:01:35.081 pcapng: explicitly disabled via build config 00:01:35.081 rawdev: explicitly disabled via build config 00:01:35.081 regexdev: explicitly disabled via build config 00:01:35.081 mldev: explicitly disabled via build config 00:01:35.082 rib: explicitly disabled via build config 00:01:35.082 sched: explicitly disabled via build config 00:01:35.082 stack: explicitly disabled via build config 00:01:35.082 ipsec: explicitly disabled via build config 00:01:35.082 pdcp: explicitly disabled via build config 00:01:35.082 fib: explicitly disabled via build config 00:01:35.082 port: explicitly disabled via build config 00:01:35.082 pdump: explicitly disabled via build config 00:01:35.082 table: explicitly disabled via build config 00:01:35.082 pipeline: explicitly disabled via build config 00:01:35.082 graph: explicitly disabled via build config 00:01:35.082 node: explicitly disabled via build config 00:01:35.082 00:01:35.082 drivers: 00:01:35.082 common/cpt: not in enabled drivers build config 00:01:35.082 common/dpaax: not in enabled drivers build config 00:01:35.082 common/iavf: not in enabled drivers build config 00:01:35.082 common/idpf: not in enabled drivers build config 00:01:35.082 common/ionic: not in enabled drivers build config 00:01:35.082 common/mvep: not in enabled drivers build config 00:01:35.082 common/octeontx: not in enabled drivers build config 00:01:35.082 bus/auxiliary: not in enabled drivers build config 00:01:35.082 bus/cdx: not in enabled drivers build config 00:01:35.082 bus/dpaa: not in enabled drivers build config 00:01:35.082 bus/fslmc: not in enabled drivers build config 00:01:35.082 bus/ifpga: not in enabled drivers build config 00:01:35.082 bus/platform: not in enabled drivers build config 00:01:35.082 bus/uacce: not in enabled drivers build config 00:01:35.082 bus/vmbus: not in enabled drivers build config 00:01:35.082 common/cnxk: not in enabled drivers build config 00:01:35.082 common/mlx5: not in enabled drivers build config 00:01:35.082 common/nfp: not in enabled drivers 
build config 00:01:35.082 common/nitrox: not in enabled drivers build config 00:01:35.082 common/qat: not in enabled drivers build config 00:01:35.082 common/sfc_efx: not in enabled drivers build config 00:01:35.082 mempool/bucket: not in enabled drivers build config 00:01:35.082 mempool/cnxk: not in enabled drivers build config 00:01:35.082 mempool/dpaa: not in enabled drivers build config 00:01:35.082 mempool/dpaa2: not in enabled drivers build config 00:01:35.082 mempool/octeontx: not in enabled drivers build config 00:01:35.082 mempool/stack: not in enabled drivers build config 00:01:35.082 dma/cnxk: not in enabled drivers build config 00:01:35.082 dma/dpaa: not in enabled drivers build config 00:01:35.082 dma/dpaa2: not in enabled drivers build config 00:01:35.082 dma/hisilicon: not in enabled drivers build config 00:01:35.082 dma/idxd: not in enabled drivers build config 00:01:35.082 dma/ioat: not in enabled drivers build config 00:01:35.082 dma/skeleton: not in enabled drivers build config 00:01:35.082 net/af_packet: not in enabled drivers build config 00:01:35.082 net/af_xdp: not in enabled drivers build config 00:01:35.082 net/ark: not in enabled drivers build config 00:01:35.082 net/atlantic: not in enabled drivers build config 00:01:35.082 net/avp: not in enabled drivers build config 00:01:35.082 net/axgbe: not in enabled drivers build config 00:01:35.082 net/bnx2x: not in enabled drivers build config 00:01:35.082 net/bnxt: not in enabled drivers build config 00:01:35.082 net/bonding: not in enabled drivers build config 00:01:35.082 net/cnxk: not in enabled drivers build config 00:01:35.082 net/cpfl: not in enabled drivers build config 00:01:35.082 net/cxgbe: not in enabled drivers build config 00:01:35.082 net/dpaa: not in enabled drivers build config 00:01:35.082 net/dpaa2: not in enabled drivers build config 00:01:35.082 net/e1000: not in enabled drivers build config 00:01:35.082 net/ena: not in enabled drivers build config 00:01:35.082 net/enetc: not in enabled drivers build config 00:01:35.082 net/enetfec: not in enabled drivers build config 00:01:35.082 net/enic: not in enabled drivers build config 00:01:35.082 net/failsafe: not in enabled drivers build config 00:01:35.082 net/fm10k: not in enabled drivers build config 00:01:35.082 net/gve: not in enabled drivers build config 00:01:35.082 net/hinic: not in enabled drivers build config 00:01:35.082 net/hns3: not in enabled drivers build config 00:01:35.082 net/i40e: not in enabled drivers build config 00:01:35.082 net/iavf: not in enabled drivers build config 00:01:35.082 net/ice: not in enabled drivers build config 00:01:35.082 net/idpf: not in enabled drivers build config 00:01:35.082 net/igc: not in enabled drivers build config 00:01:35.082 net/ionic: not in enabled drivers build config 00:01:35.082 net/ipn3ke: not in enabled drivers build config 00:01:35.082 net/ixgbe: not in enabled drivers build config 00:01:35.082 net/mana: not in enabled drivers build config 00:01:35.082 net/memif: not in enabled drivers build config 00:01:35.082 net/mlx4: not in enabled drivers build config 00:01:35.082 net/mlx5: not in enabled drivers build config 00:01:35.082 net/mvneta: not in enabled drivers build config 00:01:35.082 net/mvpp2: not in enabled drivers build config 00:01:35.082 net/netvsc: not in enabled drivers build config 00:01:35.082 net/nfb: not in enabled drivers build config 00:01:35.082 net/nfp: not in enabled drivers build config 00:01:35.082 net/ngbe: not in enabled drivers build config 00:01:35.082 net/null: not in 
enabled drivers build config 00:01:35.082 net/octeontx: not in enabled drivers build config 00:01:35.082 net/octeon_ep: not in enabled drivers build config 00:01:35.082 net/pcap: not in enabled drivers build config 00:01:35.082 net/pfe: not in enabled drivers build config 00:01:35.082 net/qede: not in enabled drivers build config 00:01:35.082 net/ring: not in enabled drivers build config 00:01:35.082 net/sfc: not in enabled drivers build config 00:01:35.082 net/softnic: not in enabled drivers build config 00:01:35.082 net/tap: not in enabled drivers build config 00:01:35.082 net/thunderx: not in enabled drivers build config 00:01:35.082 net/txgbe: not in enabled drivers build config 00:01:35.082 net/vdev_netvsc: not in enabled drivers build config 00:01:35.082 net/vhost: not in enabled drivers build config 00:01:35.082 net/virtio: not in enabled drivers build config 00:01:35.082 net/vmxnet3: not in enabled drivers build config 00:01:35.082 raw/*: missing internal dependency, "rawdev" 00:01:35.082 crypto/armv8: not in enabled drivers build config 00:01:35.082 crypto/bcmfs: not in enabled drivers build config 00:01:35.082 crypto/caam_jr: not in enabled drivers build config 00:01:35.082 crypto/ccp: not in enabled drivers build config 00:01:35.082 crypto/cnxk: not in enabled drivers build config 00:01:35.082 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.082 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.082 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.082 crypto/mlx5: not in enabled drivers build config 00:01:35.082 crypto/mvsam: not in enabled drivers build config 00:01:35.082 crypto/nitrox: not in enabled drivers build config 00:01:35.082 crypto/null: not in enabled drivers build config 00:01:35.082 crypto/octeontx: not in enabled drivers build config 00:01:35.082 crypto/openssl: not in enabled drivers build config 00:01:35.082 crypto/scheduler: not in enabled drivers build config 00:01:35.082 crypto/uadk: not in enabled drivers build config 00:01:35.082 crypto/virtio: not in enabled drivers build config 00:01:35.082 compress/isal: not in enabled drivers build config 00:01:35.082 compress/mlx5: not in enabled drivers build config 00:01:35.082 compress/nitrox: not in enabled drivers build config 00:01:35.082 compress/octeontx: not in enabled drivers build config 00:01:35.082 compress/zlib: not in enabled drivers build config 00:01:35.082 regex/*: missing internal dependency, "regexdev" 00:01:35.082 ml/*: missing internal dependency, "mldev" 00:01:35.082 vdpa/ifc: not in enabled drivers build config 00:01:35.082 vdpa/mlx5: not in enabled drivers build config 00:01:35.082 vdpa/nfp: not in enabled drivers build config 00:01:35.082 vdpa/sfc: not in enabled drivers build config 00:01:35.082 event/*: missing internal dependency, "eventdev" 00:01:35.082 baseband/*: missing internal dependency, "bbdev" 00:01:35.082 gpu/*: missing internal dependency, "gpudev" 00:01:35.082 00:01:35.082 00:01:35.082 Build targets in project: 84 00:01:35.082 00:01:35.082 DPDK 24.03.0 00:01:35.082 00:01:35.082 User defined options 00:01:35.082 buildtype : debug 00:01:35.082 default_library : shared 00:01:35.082 libdir : lib 00:01:35.082 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:35.082 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:35.082 c_link_args : 00:01:35.082 cpu_instruction_set: native 00:01:35.082 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:35.082 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:35.082 enable_docs : false 00:01:35.082 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:35.082 enable_kmods : false 00:01:35.082 max_lcores : 128 00:01:35.082 tests : false 00:01:35.082 00:01:35.082 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.082 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:35.082 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.082 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.082 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.082 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.082 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.082 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.082 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.082 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.082 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.344 [10/267] Linking static target lib/librte_kvargs.a 00:01:35.344 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.344 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.344 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.344 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.344 [15/267] Linking static target lib/librte_log.a 00:01:35.344 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.344 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.344 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.344 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.344 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.344 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.344 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:35.344 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.344 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.344 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.344 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.344 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.344 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.344 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.344 [30/267] Linking static target lib/librte_pci.a 00:01:35.344 [31/267] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:35.344 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.344 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:35.344 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.344 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.344 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.603 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.603 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.603 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.603 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.603 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.603 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.603 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.603 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.603 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.603 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.603 [47/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.603 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.603 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.603 [50/267] Linking static target lib/librte_ring.a 00:01:35.603 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.603 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.603 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.603 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.603 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.603 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.603 [57/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.603 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.603 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.603 [60/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.603 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.603 [62/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.603 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.603 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.603 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.603 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.603 [67/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.603 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.603 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.864 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.864 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.864 [72/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.864 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.864 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.864 [75/267] Linking static target lib/librte_cmdline.a 00:01:35.864 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.864 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.864 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.864 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.864 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.864 [81/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.864 [82/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.864 [83/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.864 [84/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.864 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.864 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.864 [87/267] Linking static target lib/librte_meter.a 00:01:35.864 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.864 [89/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.864 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.864 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.864 [92/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.864 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.864 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.864 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.864 [96/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.864 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.864 [98/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.864 [99/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.864 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.864 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.864 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:35.864 [103/267] Linking static target lib/librte_telemetry.a 00:01:35.864 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.864 [105/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.864 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.864 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.864 [108/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.864 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.864 [110/267] Linking static target lib/librte_mempool.a 00:01:35.864 [111/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.864 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.864 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.864 [114/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.864 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.864 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.864 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.864 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.864 [119/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.864 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.864 [121/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.864 [122/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.864 [123/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.864 [124/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.864 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.864 [126/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.864 [127/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.864 [128/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.864 [129/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.864 [130/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.864 [131/267] Linking static target lib/librte_timer.a 00:01:35.864 [132/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.864 [133/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.864 [134/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.864 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.864 [136/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.864 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.864 [138/267] Linking static target lib/librte_dmadev.a 00:01:35.864 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.864 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.864 [141/267] Linking static target lib/librte_power.a 00:01:35.864 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.864 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.864 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.864 [145/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.865 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.865 [147/267] Linking target lib/librte_log.so.24.1 00:01:35.865 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.865 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.865 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.865 [151/267] Linking static target 
lib/librte_reorder.a 00:01:35.865 [152/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.865 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.865 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.865 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.865 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.865 [157/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.865 [158/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.865 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.865 [160/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.865 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.865 [162/267] Linking static target lib/librte_net.a 00:01:35.865 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.865 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.865 [165/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.865 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.865 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.865 [168/267] Linking static target lib/librte_rcu.a 00:01:35.865 [169/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.865 [170/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.865 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.865 [172/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.865 [173/267] Linking static target lib/librte_compressdev.a 00:01:35.865 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.865 [175/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.865 [176/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.126 [177/267] Linking static target lib/librte_hash.a 00:01:36.126 [178/267] Linking static target lib/librte_security.a 00:01:36.126 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.126 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.126 [181/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.126 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.126 [183/267] Linking static target lib/librte_eal.a 00:01:36.126 [184/267] Linking static target lib/librte_mbuf.a 00:01:36.126 [185/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:36.126 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.126 [187/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.126 [188/267] Linking target lib/librte_kvargs.so.24.1 00:01:36.126 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.126 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.126 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.126 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.126 [193/267] Linking static target 
drivers/librte_bus_vdev.a 00:01:36.126 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.126 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.126 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.126 [197/267] Linking static target drivers/librte_bus_pci.a 00:01:36.126 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.126 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.126 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.126 [201/267] Linking static target drivers/librte_mempool_ring.a 00:01:36.126 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.126 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.387 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.387 [205/267] Linking static target lib/librte_cryptodev.a 00:01:36.387 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.387 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.387 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.387 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.387 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.387 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.387 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:36.387 [213/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:36.387 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.648 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:36.648 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.648 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.648 [218/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.908 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.908 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.908 [221/267] Linking static target lib/librte_ethdev.a 00:01:36.909 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.909 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.909 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.909 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.909 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.850 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.850 [228/267] Linking static target lib/librte_vhost.a 00:01:38.424 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.815 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.424 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.809 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.809 [233/267] Linking target lib/librte_eal.so.24.1 00:01:47.809 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:48.086 [235/267] Linking target lib/librte_ring.so.24.1 00:01:48.086 [236/267] Linking target lib/librte_timer.so.24.1 00:01:48.086 [237/267] Linking target lib/librte_meter.so.24.1 00:01:48.086 [238/267] Linking target lib/librte_pci.so.24.1 00:01:48.086 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:48.086 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:48.086 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:48.086 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:48.086 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:48.086 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:48.086 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:48.086 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:48.086 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:48.086 [248/267] Linking target lib/librte_rcu.so.24.1 00:01:48.345 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.345 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:48.345 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.345 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:48.606 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.606 [254/267] Linking target lib/librte_net.so.24.1 00:01:48.606 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:48.606 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:48.606 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:48.867 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.867 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.867 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:48.867 [261/267] Linking target lib/librte_hash.so.24.1 00:01:48.867 [262/267] Linking target lib/librte_security.so.24.1 00:01:48.867 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:48.867 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.867 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:49.128 [266/267] Linking target lib/librte_power.so.24.1 00:01:49.128 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:49.128 INFO: autodetecting backend as ninja 00:01:49.128 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:50.069 CC lib/ut_mock/mock.o 00:01:50.069 CC lib/log/log.o 00:01:50.069 CC lib/log/log_flags.o 00:01:50.069 CC lib/log/log_deprecated.o 00:01:50.069 CC lib/ut/ut.o 00:01:50.329 LIB libspdk_log.a 
00:01:50.329 LIB libspdk_ut_mock.a 00:01:50.329 LIB libspdk_ut.a 00:01:50.329 SO libspdk_log.so.7.0 00:01:50.329 SO libspdk_ut_mock.so.6.0 00:01:50.329 SO libspdk_ut.so.2.0 00:01:50.329 SYMLINK libspdk_log.so 00:01:50.329 SYMLINK libspdk_ut_mock.so 00:01:50.329 SYMLINK libspdk_ut.so 00:01:50.900 CC lib/dma/dma.o 00:01:50.900 CC lib/ioat/ioat.o 00:01:50.900 CXX lib/trace_parser/trace.o 00:01:50.900 CC lib/util/base64.o 00:01:50.900 CC lib/util/bit_array.o 00:01:50.900 CC lib/util/cpuset.o 00:01:50.900 CC lib/util/crc16.o 00:01:50.900 CC lib/util/crc32.o 00:01:50.900 CC lib/util/crc32c.o 00:01:50.900 CC lib/util/crc32_ieee.o 00:01:50.900 CC lib/util/crc64.o 00:01:50.900 CC lib/util/dif.o 00:01:50.900 CC lib/util/fd.o 00:01:50.900 CC lib/util/fd_group.o 00:01:50.900 CC lib/util/file.o 00:01:50.900 CC lib/util/hexlify.o 00:01:50.900 CC lib/util/iov.o 00:01:50.900 CC lib/util/math.o 00:01:50.900 CC lib/util/net.o 00:01:50.900 CC lib/util/pipe.o 00:01:50.900 CC lib/util/string.o 00:01:50.900 CC lib/util/strerror_tls.o 00:01:50.900 CC lib/util/uuid.o 00:01:50.900 CC lib/util/xor.o 00:01:50.900 CC lib/util/zipf.o 00:01:50.900 CC lib/vfio_user/host/vfio_user_pci.o 00:01:50.900 CC lib/vfio_user/host/vfio_user.o 00:01:50.900 LIB libspdk_dma.a 00:01:50.900 SO libspdk_dma.so.4.0 00:01:51.160 LIB libspdk_ioat.a 00:01:51.160 SYMLINK libspdk_dma.so 00:01:51.160 SO libspdk_ioat.so.7.0 00:01:51.160 SYMLINK libspdk_ioat.so 00:01:51.160 LIB libspdk_vfio_user.a 00:01:51.160 SO libspdk_vfio_user.so.5.0 00:01:51.160 LIB libspdk_util.a 00:01:51.422 SYMLINK libspdk_vfio_user.so 00:01:51.422 SO libspdk_util.so.9.1 00:01:51.422 SYMLINK libspdk_util.so 00:01:51.683 LIB libspdk_trace_parser.a 00:01:51.683 SO libspdk_trace_parser.so.5.0 00:01:51.683 SYMLINK libspdk_trace_parser.so 00:01:51.944 CC lib/env_dpdk/env.o 00:01:51.944 CC lib/env_dpdk/memory.o 00:01:51.944 CC lib/env_dpdk/pci.o 00:01:51.944 CC lib/env_dpdk/init.o 00:01:51.944 CC lib/env_dpdk/threads.o 00:01:51.944 CC lib/env_dpdk/pci_ioat.o 00:01:51.944 CC lib/env_dpdk/pci_virtio.o 00:01:51.944 CC lib/env_dpdk/pci_vmd.o 00:01:51.944 CC lib/env_dpdk/pci_idxd.o 00:01:51.944 CC lib/env_dpdk/pci_dpdk.o 00:01:51.944 CC lib/env_dpdk/pci_event.o 00:01:51.944 CC lib/env_dpdk/sigbus_handler.o 00:01:51.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:51.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.944 CC lib/conf/conf.o 00:01:51.944 CC lib/json/json_parse.o 00:01:51.944 CC lib/json/json_util.o 00:01:51.944 CC lib/json/json_write.o 00:01:51.944 CC lib/vmd/vmd.o 00:01:51.944 CC lib/vmd/led.o 00:01:51.944 CC lib/rdma_utils/rdma_utils.o 00:01:51.944 CC lib/idxd/idxd.o 00:01:51.944 CC lib/idxd/idxd_user.o 00:01:51.944 CC lib/idxd/idxd_kernel.o 00:01:51.944 CC lib/rdma_provider/common.o 00:01:51.944 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:52.206 LIB libspdk_rdma_provider.a 00:01:52.206 LIB libspdk_conf.a 00:01:52.206 SO libspdk_rdma_provider.so.6.0 00:01:52.206 SO libspdk_conf.so.6.0 00:01:52.206 LIB libspdk_rdma_utils.a 00:01:52.206 LIB libspdk_json.a 00:01:52.206 SYMLINK libspdk_rdma_provider.so 00:01:52.206 SYMLINK libspdk_conf.so 00:01:52.206 SO libspdk_rdma_utils.so.1.0 00:01:52.206 SO libspdk_json.so.6.0 00:01:52.206 SYMLINK libspdk_rdma_utils.so 00:01:52.206 SYMLINK libspdk_json.so 00:01:52.467 LIB libspdk_idxd.a 00:01:52.467 SO libspdk_idxd.so.12.0 00:01:52.467 LIB libspdk_vmd.a 00:01:52.467 SO libspdk_vmd.so.6.0 00:01:52.467 SYMLINK libspdk_idxd.so 00:01:52.468 SYMLINK libspdk_vmd.so 00:01:52.729 CC lib/jsonrpc/jsonrpc_server.o 00:01:52.729 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:52.729 CC lib/jsonrpc/jsonrpc_client.o 00:01:52.729 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:52.990 LIB libspdk_jsonrpc.a 00:01:52.990 SO libspdk_jsonrpc.so.6.0 00:01:52.990 SYMLINK libspdk_jsonrpc.so 00:01:52.990 LIB libspdk_env_dpdk.a 00:01:53.251 SO libspdk_env_dpdk.so.15.0 00:01:53.251 SYMLINK libspdk_env_dpdk.so 00:01:53.251 CC lib/rpc/rpc.o 00:01:53.511 LIB libspdk_rpc.a 00:01:53.511 SO libspdk_rpc.so.6.0 00:01:53.773 SYMLINK libspdk_rpc.so 00:01:54.033 CC lib/keyring/keyring.o 00:01:54.033 CC lib/keyring/keyring_rpc.o 00:01:54.033 CC lib/trace/trace.o 00:01:54.033 CC lib/trace/trace_flags.o 00:01:54.033 CC lib/trace/trace_rpc.o 00:01:54.033 CC lib/notify/notify.o 00:01:54.033 CC lib/notify/notify_rpc.o 00:01:54.293 LIB libspdk_notify.a 00:01:54.293 LIB libspdk_keyring.a 00:01:54.293 SO libspdk_notify.so.6.0 00:01:54.293 SO libspdk_keyring.so.1.0 00:01:54.293 LIB libspdk_trace.a 00:01:54.293 SYMLINK libspdk_notify.so 00:01:54.293 SO libspdk_trace.so.10.0 00:01:54.293 SYMLINK libspdk_keyring.so 00:01:54.293 SYMLINK libspdk_trace.so 00:01:54.862 CC lib/sock/sock.o 00:01:54.862 CC lib/sock/sock_rpc.o 00:01:54.862 CC lib/thread/thread.o 00:01:54.862 CC lib/thread/iobuf.o 00:01:55.122 LIB libspdk_sock.a 00:01:55.122 SO libspdk_sock.so.10.0 00:01:55.122 SYMLINK libspdk_sock.so 00:01:55.383 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.383 CC lib/nvme/nvme_ctrlr.o 00:01:55.383 CC lib/nvme/nvme_ns_cmd.o 00:01:55.383 CC lib/nvme/nvme_fabric.o 00:01:55.383 CC lib/nvme/nvme_ns.o 00:01:55.383 CC lib/nvme/nvme_pcie_common.o 00:01:55.383 CC lib/nvme/nvme_pcie.o 00:01:55.383 CC lib/nvme/nvme_qpair.o 00:01:55.383 CC lib/nvme/nvme.o 00:01:55.644 CC lib/nvme/nvme_quirks.o 00:01:55.644 CC lib/nvme/nvme_transport.o 00:01:55.644 CC lib/nvme/nvme_discovery.o 00:01:55.644 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.644 CC lib/nvme/nvme_tcp.o 00:01:55.644 CC lib/nvme/nvme_opal.o 00:01:55.644 CC lib/nvme/nvme_io_msg.o 00:01:55.644 CC lib/nvme/nvme_poll_group.o 00:01:55.644 CC lib/nvme/nvme_zns.o 00:01:55.644 CC lib/nvme/nvme_stubs.o 00:01:55.644 CC lib/nvme/nvme_auth.o 00:01:55.644 CC lib/nvme/nvme_rdma.o 00:01:55.644 CC lib/nvme/nvme_cuse.o 00:01:55.644 CC lib/nvme/nvme_vfio_user.o 00:01:55.906 LIB libspdk_thread.a 00:01:56.167 SO libspdk_thread.so.10.1 00:01:56.167 SYMLINK libspdk_thread.so 00:01:56.427 CC lib/blob/blobstore.o 00:01:56.427 CC lib/blob/request.o 00:01:56.427 CC lib/blob/zeroes.o 00:01:56.427 CC lib/blob/blob_bs_dev.o 00:01:56.427 CC lib/virtio/virtio.o 00:01:56.427 CC lib/virtio/virtio_vhost_user.o 00:01:56.427 CC lib/virtio/virtio_vfio_user.o 00:01:56.427 CC lib/virtio/virtio_pci.o 00:01:56.427 CC lib/vfu_tgt/tgt_endpoint.o 00:01:56.427 CC lib/vfu_tgt/tgt_rpc.o 00:01:56.427 CC lib/accel/accel.o 00:01:56.427 CC lib/accel/accel_sw.o 00:01:56.427 CC lib/accel/accel_rpc.o 00:01:56.427 CC lib/init/json_config.o 00:01:56.427 CC lib/init/subsystem.o 00:01:56.427 CC lib/init/subsystem_rpc.o 00:01:56.427 CC lib/init/rpc.o 00:01:56.712 LIB libspdk_init.a 00:01:56.712 LIB libspdk_virtio.a 00:01:56.712 SO libspdk_init.so.5.0 00:01:56.712 LIB libspdk_vfu_tgt.a 00:01:56.974 SO libspdk_virtio.so.7.0 00:01:56.974 SO libspdk_vfu_tgt.so.3.0 00:01:56.974 SYMLINK libspdk_init.so 00:01:56.974 SYMLINK libspdk_virtio.so 00:01:56.974 SYMLINK libspdk_vfu_tgt.so 00:01:57.235 CC lib/event/reactor.o 00:01:57.235 CC lib/event/app.o 00:01:57.235 CC lib/event/log_rpc.o 00:01:57.235 CC lib/event/app_rpc.o 00:01:57.235 CC 
lib/event/scheduler_static.o 00:01:57.496 LIB libspdk_accel.a 00:01:57.496 LIB libspdk_nvme.a 00:01:57.496 SO libspdk_accel.so.15.1 00:01:57.496 SYMLINK libspdk_accel.so 00:01:57.496 SO libspdk_nvme.so.13.1 00:01:57.496 LIB libspdk_event.a 00:01:57.758 SO libspdk_event.so.14.0 00:01:57.758 SYMLINK libspdk_event.so 00:01:57.758 SYMLINK libspdk_nvme.so 00:01:57.758 CC lib/bdev/bdev.o 00:01:57.758 CC lib/bdev/bdev_rpc.o 00:01:57.758 CC lib/bdev/bdev_zone.o 00:01:57.758 CC lib/bdev/part.o 00:01:57.758 CC lib/bdev/scsi_nvme.o 00:01:59.147 LIB libspdk_blob.a 00:01:59.147 SO libspdk_blob.so.11.0 00:01:59.147 SYMLINK libspdk_blob.so 00:01:59.408 CC lib/blobfs/blobfs.o 00:01:59.408 CC lib/blobfs/tree.o 00:01:59.408 CC lib/lvol/lvol.o 00:01:59.979 LIB libspdk_bdev.a 00:01:59.979 SO libspdk_bdev.so.15.1 00:02:00.240 LIB libspdk_blobfs.a 00:02:00.240 SYMLINK libspdk_bdev.so 00:02:00.240 SO libspdk_blobfs.so.10.0 00:02:00.240 LIB libspdk_lvol.a 00:02:00.240 SYMLINK libspdk_blobfs.so 00:02:00.240 SO libspdk_lvol.so.10.0 00:02:00.502 SYMLINK libspdk_lvol.so 00:02:00.502 CC lib/scsi/dev.o 00:02:00.502 CC lib/scsi/lun.o 00:02:00.502 CC lib/scsi/port.o 00:02:00.502 CC lib/scsi/scsi.o 00:02:00.502 CC lib/scsi/scsi_pr.o 00:02:00.502 CC lib/scsi/scsi_bdev.o 00:02:00.502 CC lib/scsi/scsi_rpc.o 00:02:00.502 CC lib/scsi/task.o 00:02:00.502 CC lib/nbd/nbd.o 00:02:00.502 CC lib/nbd/nbd_rpc.o 00:02:00.502 CC lib/nvmf/ctrlr.o 00:02:00.502 CC lib/nvmf/ctrlr_discovery.o 00:02:00.502 CC lib/nvmf/ctrlr_bdev.o 00:02:00.502 CC lib/ftl/ftl_core.o 00:02:00.502 CC lib/ftl/ftl_init.o 00:02:00.502 CC lib/nvmf/subsystem.o 00:02:00.502 CC lib/nvmf/nvmf.o 00:02:00.502 CC lib/ftl/ftl_layout.o 00:02:00.502 CC lib/nvmf/nvmf_rpc.o 00:02:00.502 CC lib/ftl/ftl_debug.o 00:02:00.502 CC lib/nvmf/transport.o 00:02:00.502 CC lib/ftl/ftl_io.o 00:02:00.502 CC lib/nvmf/tcp.o 00:02:00.502 CC lib/ftl/ftl_sb.o 00:02:00.502 CC lib/nvmf/stubs.o 00:02:00.502 CC lib/ftl/ftl_l2p.o 00:02:00.502 CC lib/ublk/ublk.o 00:02:00.502 CC lib/nvmf/mdns_server.o 00:02:00.502 CC lib/ftl/ftl_l2p_flat.o 00:02:00.502 CC lib/nvmf/vfio_user.o 00:02:00.502 CC lib/ublk/ublk_rpc.o 00:02:00.502 CC lib/ftl/ftl_nv_cache.o 00:02:00.502 CC lib/ftl/ftl_band.o 00:02:00.502 CC lib/nvmf/rdma.o 00:02:00.502 CC lib/ftl/ftl_band_ops.o 00:02:00.502 CC lib/nvmf/auth.o 00:02:00.502 CC lib/ftl/ftl_writer.o 00:02:00.502 CC lib/ftl/ftl_rq.o 00:02:00.502 CC lib/ftl/ftl_reloc.o 00:02:00.502 CC lib/ftl/ftl_l2p_cache.o 00:02:00.502 CC lib/ftl/ftl_p2l.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.502 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.502 CC lib/ftl/utils/ftl_conf.o 00:02:00.502 CC lib/ftl/utils/ftl_md.o 00:02:00.502 CC lib/ftl/utils/ftl_mempool.o 00:02:00.502 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.502 CC lib/ftl/utils/ftl_property.o 00:02:00.502 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.502 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.502 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.502 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:00.502 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.502 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.502 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:00.502 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.502 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.502 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.502 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.761 CC lib/ftl/base/ftl_base_dev.o 00:02:00.761 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.761 CC lib/ftl/ftl_trace.o 00:02:01.021 LIB libspdk_nbd.a 00:02:01.021 SO libspdk_nbd.so.7.0 00:02:01.021 LIB libspdk_scsi.a 00:02:01.021 SO libspdk_scsi.so.9.0 00:02:01.303 SYMLINK libspdk_nbd.so 00:02:01.303 LIB libspdk_ublk.a 00:02:01.303 SYMLINK libspdk_scsi.so 00:02:01.303 SO libspdk_ublk.so.3.0 00:02:01.303 SYMLINK libspdk_ublk.so 00:02:01.649 LIB libspdk_ftl.a 00:02:01.649 CC lib/iscsi/conn.o 00:02:01.649 CC lib/iscsi/init_grp.o 00:02:01.649 CC lib/iscsi/iscsi.o 00:02:01.649 CC lib/iscsi/md5.o 00:02:01.649 CC lib/iscsi/param.o 00:02:01.649 CC lib/iscsi/portal_grp.o 00:02:01.649 CC lib/iscsi/tgt_node.o 00:02:01.649 CC lib/iscsi/iscsi_subsystem.o 00:02:01.649 CC lib/iscsi/iscsi_rpc.o 00:02:01.649 CC lib/iscsi/task.o 00:02:01.649 CC lib/vhost/vhost.o 00:02:01.649 CC lib/vhost/vhost_rpc.o 00:02:01.649 CC lib/vhost/vhost_scsi.o 00:02:01.649 CC lib/vhost/vhost_blk.o 00:02:01.649 CC lib/vhost/rte_vhost_user.o 00:02:01.649 SO libspdk_ftl.so.9.0 00:02:01.917 SYMLINK libspdk_ftl.so 00:02:02.491 LIB libspdk_nvmf.a 00:02:02.491 SO libspdk_nvmf.so.19.0 00:02:02.491 LIB libspdk_vhost.a 00:02:02.491 SO libspdk_vhost.so.8.0 00:02:02.752 SYMLINK libspdk_nvmf.so 00:02:02.752 LIB libspdk_iscsi.a 00:02:02.752 SYMLINK libspdk_vhost.so 00:02:02.752 SO libspdk_iscsi.so.8.0 00:02:03.014 SYMLINK libspdk_iscsi.so 00:02:03.585 CC module/env_dpdk/env_dpdk_rpc.o 00:02:03.585 CC module/vfu_device/vfu_virtio.o 00:02:03.585 CC module/vfu_device/vfu_virtio_blk.o 00:02:03.585 CC module/vfu_device/vfu_virtio_rpc.o 00:02:03.585 CC module/vfu_device/vfu_virtio_scsi.o 00:02:03.585 LIB libspdk_env_dpdk_rpc.a 00:02:03.585 CC module/keyring/linux/keyring.o 00:02:03.585 CC module/keyring/linux/keyring_rpc.o 00:02:03.585 CC module/keyring/file/keyring.o 00:02:03.585 CC module/keyring/file/keyring_rpc.o 00:02:03.585 CC module/blob/bdev/blob_bdev.o 00:02:03.585 CC module/accel/iaa/accel_iaa.o 00:02:03.585 CC module/sock/posix/posix.o 00:02:03.585 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:03.585 CC module/accel/dsa/accel_dsa.o 00:02:03.585 CC module/accel/iaa/accel_iaa_rpc.o 00:02:03.585 CC module/accel/dsa/accel_dsa_rpc.o 00:02:03.585 CC module/scheduler/gscheduler/gscheduler.o 00:02:03.585 CC module/accel/error/accel_error_rpc.o 00:02:03.585 CC module/accel/error/accel_error.o 00:02:03.585 CC module/accel/ioat/accel_ioat.o 00:02:03.585 CC module/accel/ioat/accel_ioat_rpc.o 00:02:03.585 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:03.585 SO libspdk_env_dpdk_rpc.so.6.0 00:02:03.845 SYMLINK libspdk_env_dpdk_rpc.so 00:02:03.845 LIB libspdk_keyring_linux.a 00:02:03.845 LIB libspdk_keyring_file.a 00:02:03.845 LIB libspdk_scheduler_gscheduler.a 00:02:03.845 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.846 SO libspdk_keyring_linux.so.1.0 00:02:03.846 SO libspdk_keyring_file.so.1.0 00:02:03.846 LIB libspdk_accel_error.a 00:02:03.846 LIB libspdk_scheduler_dynamic.a 00:02:03.846 SO libspdk_scheduler_gscheduler.so.4.0 00:02:03.846 LIB libspdk_accel_ioat.a 00:02:03.846 LIB libspdk_accel_iaa.a 00:02:03.846 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:03.846 SO libspdk_scheduler_dynamic.so.4.0 00:02:03.846 SO 
libspdk_accel_error.so.2.0 00:02:03.846 SO libspdk_accel_iaa.so.3.0 00:02:03.846 SYMLINK libspdk_keyring_linux.so 00:02:03.846 SYMLINK libspdk_keyring_file.so 00:02:03.846 SO libspdk_accel_ioat.so.6.0 00:02:03.846 LIB libspdk_accel_dsa.a 00:02:03.846 LIB libspdk_blob_bdev.a 00:02:03.846 SYMLINK libspdk_scheduler_gscheduler.so 00:02:03.846 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:04.106 SYMLINK libspdk_scheduler_dynamic.so 00:02:04.106 SYMLINK libspdk_accel_iaa.so 00:02:04.106 SO libspdk_accel_dsa.so.5.0 00:02:04.106 SYMLINK libspdk_accel_ioat.so 00:02:04.106 SYMLINK libspdk_accel_error.so 00:02:04.106 SO libspdk_blob_bdev.so.11.0 00:02:04.106 SYMLINK libspdk_accel_dsa.so 00:02:04.106 LIB libspdk_vfu_device.a 00:02:04.106 SYMLINK libspdk_blob_bdev.so 00:02:04.106 SO libspdk_vfu_device.so.3.0 00:02:04.106 SYMLINK libspdk_vfu_device.so 00:02:04.367 LIB libspdk_sock_posix.a 00:02:04.367 SO libspdk_sock_posix.so.6.0 00:02:04.367 SYMLINK libspdk_sock_posix.so 00:02:04.631 CC module/bdev/gpt/gpt.o 00:02:04.631 CC module/bdev/nvme/bdev_nvme.o 00:02:04.631 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:04.631 CC module/bdev/gpt/vbdev_gpt.o 00:02:04.631 CC module/bdev/nvme/nvme_rpc.o 00:02:04.631 CC module/bdev/nvme/vbdev_opal.o 00:02:04.631 CC module/bdev/nvme/bdev_mdns_client.o 00:02:04.631 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:04.631 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:04.631 CC module/bdev/malloc/bdev_malloc.o 00:02:04.631 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:04.631 CC module/bdev/null/bdev_null.o 00:02:04.631 CC module/bdev/aio/bdev_aio_rpc.o 00:02:04.631 CC module/bdev/aio/bdev_aio.o 00:02:04.631 CC module/blobfs/bdev/blobfs_bdev.o 00:02:04.631 CC module/bdev/null/bdev_null_rpc.o 00:02:04.631 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:04.631 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:04.631 CC module/bdev/error/vbdev_error.o 00:02:04.631 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:04.631 CC module/bdev/delay/vbdev_delay.o 00:02:04.631 CC module/bdev/error/vbdev_error_rpc.o 00:02:04.631 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:04.631 CC module/bdev/iscsi/bdev_iscsi.o 00:02:04.631 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:04.631 CC module/bdev/ftl/bdev_ftl.o 00:02:04.631 CC module/bdev/raid/bdev_raid.o 00:02:04.631 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:04.631 CC module/bdev/lvol/vbdev_lvol.o 00:02:04.631 CC module/bdev/passthru/vbdev_passthru.o 00:02:04.631 CC module/bdev/raid/bdev_raid_sb.o 00:02:04.631 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:04.631 CC module/bdev/raid/raid0.o 00:02:04.631 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:04.631 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:04.631 CC module/bdev/raid/bdev_raid_rpc.o 00:02:04.631 CC module/bdev/raid/raid1.o 00:02:04.631 CC module/bdev/split/vbdev_split.o 00:02:04.631 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:04.631 CC module/bdev/raid/concat.o 00:02:04.631 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:04.631 CC module/bdev/split/vbdev_split_rpc.o 00:02:04.891 LIB libspdk_blobfs_bdev.a 00:02:04.891 SO libspdk_blobfs_bdev.so.6.0 00:02:04.891 LIB libspdk_bdev_gpt.a 00:02:04.891 LIB libspdk_bdev_null.a 00:02:04.892 LIB libspdk_bdev_error.a 00:02:04.892 LIB libspdk_bdev_split.a 00:02:04.892 SO libspdk_bdev_gpt.so.6.0 00:02:04.892 SO libspdk_bdev_null.so.6.0 00:02:04.892 SYMLINK libspdk_blobfs_bdev.so 00:02:04.892 SO libspdk_bdev_error.so.6.0 00:02:04.892 SO libspdk_bdev_split.so.6.0 00:02:04.892 LIB libspdk_bdev_ftl.a 00:02:04.892 LIB libspdk_bdev_zone_block.a 
00:02:04.892 LIB libspdk_bdev_passthru.a 00:02:04.892 LIB libspdk_bdev_delay.a 00:02:04.892 SYMLINK libspdk_bdev_gpt.so 00:02:04.892 LIB libspdk_bdev_malloc.a 00:02:04.892 SO libspdk_bdev_ftl.so.6.0 00:02:04.892 LIB libspdk_bdev_iscsi.a 00:02:04.892 LIB libspdk_bdev_aio.a 00:02:04.892 SYMLINK libspdk_bdev_null.so 00:02:04.892 SO libspdk_bdev_zone_block.so.6.0 00:02:05.151 SO libspdk_bdev_passthru.so.6.0 00:02:05.151 SO libspdk_bdev_iscsi.so.6.0 00:02:05.151 SYMLINK libspdk_bdev_error.so 00:02:05.151 SO libspdk_bdev_delay.so.6.0 00:02:05.151 SO libspdk_bdev_malloc.so.6.0 00:02:05.151 SYMLINK libspdk_bdev_split.so 00:02:05.151 SO libspdk_bdev_aio.so.6.0 00:02:05.151 SYMLINK libspdk_bdev_ftl.so 00:02:05.151 SYMLINK libspdk_bdev_zone_block.so 00:02:05.151 SYMLINK libspdk_bdev_passthru.so 00:02:05.151 SYMLINK libspdk_bdev_malloc.so 00:02:05.151 SYMLINK libspdk_bdev_delay.so 00:02:05.151 SYMLINK libspdk_bdev_iscsi.so 00:02:05.151 SYMLINK libspdk_bdev_aio.so 00:02:05.151 LIB libspdk_bdev_virtio.a 00:02:05.151 LIB libspdk_bdev_lvol.a 00:02:05.151 SO libspdk_bdev_virtio.so.6.0 00:02:05.151 SO libspdk_bdev_lvol.so.6.0 00:02:05.151 SYMLINK libspdk_bdev_lvol.so 00:02:05.151 SYMLINK libspdk_bdev_virtio.so 00:02:05.412 LIB libspdk_bdev_raid.a 00:02:05.672 SO libspdk_bdev_raid.so.6.0 00:02:05.672 SYMLINK libspdk_bdev_raid.so 00:02:06.613 LIB libspdk_bdev_nvme.a 00:02:06.613 SO libspdk_bdev_nvme.so.7.0 00:02:06.613 SYMLINK libspdk_bdev_nvme.so 00:02:07.556 CC module/event/subsystems/sock/sock.o 00:02:07.556 CC module/event/subsystems/iobuf/iobuf.o 00:02:07.556 CC module/event/subsystems/scheduler/scheduler.o 00:02:07.556 CC module/event/subsystems/vmd/vmd.o 00:02:07.556 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:07.556 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:07.556 CC module/event/subsystems/keyring/keyring.o 00:02:07.556 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:07.556 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:07.556 LIB libspdk_event_keyring.a 00:02:07.556 LIB libspdk_event_vfu_tgt.a 00:02:07.556 LIB libspdk_event_sock.a 00:02:07.556 LIB libspdk_event_scheduler.a 00:02:07.556 LIB libspdk_event_vhost_blk.a 00:02:07.556 LIB libspdk_event_vmd.a 00:02:07.556 LIB libspdk_event_iobuf.a 00:02:07.556 SO libspdk_event_keyring.so.1.0 00:02:07.556 SO libspdk_event_vfu_tgt.so.3.0 00:02:07.556 SO libspdk_event_vhost_blk.so.3.0 00:02:07.556 SO libspdk_event_sock.so.5.0 00:02:07.556 SO libspdk_event_scheduler.so.4.0 00:02:07.556 SO libspdk_event_vmd.so.6.0 00:02:07.556 SO libspdk_event_iobuf.so.3.0 00:02:07.556 SYMLINK libspdk_event_keyring.so 00:02:07.556 SYMLINK libspdk_event_vfu_tgt.so 00:02:07.556 SYMLINK libspdk_event_iobuf.so 00:02:07.817 SYMLINK libspdk_event_vhost_blk.so 00:02:07.817 SYMLINK libspdk_event_sock.so 00:02:07.817 SYMLINK libspdk_event_scheduler.so 00:02:07.817 SYMLINK libspdk_event_vmd.so 00:02:08.078 CC module/event/subsystems/accel/accel.o 00:02:08.078 LIB libspdk_event_accel.a 00:02:08.338 SO libspdk_event_accel.so.6.0 00:02:08.338 SYMLINK libspdk_event_accel.so 00:02:08.599 CC module/event/subsystems/bdev/bdev.o 00:02:08.859 LIB libspdk_event_bdev.a 00:02:08.859 SO libspdk_event_bdev.so.6.0 00:02:08.859 SYMLINK libspdk_event_bdev.so 00:02:09.120 CC module/event/subsystems/nbd/nbd.o 00:02:09.381 CC module/event/subsystems/scsi/scsi.o 00:02:09.381 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:09.381 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:09.381 CC module/event/subsystems/ublk/ublk.o 00:02:09.381 LIB libspdk_event_nbd.a 00:02:09.381 SO 
libspdk_event_nbd.so.6.0 00:02:09.381 LIB libspdk_event_ublk.a 00:02:09.381 LIB libspdk_event_scsi.a 00:02:09.381 SO libspdk_event_ublk.so.3.0 00:02:09.381 SO libspdk_event_scsi.so.6.0 00:02:09.381 LIB libspdk_event_nvmf.a 00:02:09.381 SYMLINK libspdk_event_nbd.so 00:02:09.643 SYMLINK libspdk_event_ublk.so 00:02:09.643 SO libspdk_event_nvmf.so.6.0 00:02:09.643 SYMLINK libspdk_event_scsi.so 00:02:09.643 SYMLINK libspdk_event_nvmf.so 00:02:09.903 CC module/event/subsystems/iscsi/iscsi.o 00:02:09.903 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:10.163 LIB libspdk_event_iscsi.a 00:02:10.163 LIB libspdk_event_vhost_scsi.a 00:02:10.163 SO libspdk_event_iscsi.so.6.0 00:02:10.163 SO libspdk_event_vhost_scsi.so.3.0 00:02:10.163 SYMLINK libspdk_event_iscsi.so 00:02:10.163 SYMLINK libspdk_event_vhost_scsi.so 00:02:10.423 SO libspdk.so.6.0 00:02:10.423 SYMLINK libspdk.so 00:02:10.685 TEST_HEADER include/spdk/accel.h 00:02:10.685 TEST_HEADER include/spdk/accel_module.h 00:02:10.685 TEST_HEADER include/spdk/assert.h 00:02:10.685 TEST_HEADER include/spdk/barrier.h 00:02:10.685 TEST_HEADER include/spdk/base64.h 00:02:10.685 TEST_HEADER include/spdk/bdev_module.h 00:02:10.685 TEST_HEADER include/spdk/bdev.h 00:02:10.685 TEST_HEADER include/spdk/bdev_zone.h 00:02:10.685 TEST_HEADER include/spdk/bit_pool.h 00:02:10.685 TEST_HEADER include/spdk/bit_array.h 00:02:10.685 CC test/rpc_client/rpc_client_test.o 00:02:10.685 TEST_HEADER include/spdk/blob_bdev.h 00:02:10.685 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:10.685 TEST_HEADER include/spdk/blobfs.h 00:02:10.685 TEST_HEADER include/spdk/blob.h 00:02:10.685 TEST_HEADER include/spdk/conf.h 00:02:10.685 TEST_HEADER include/spdk/config.h 00:02:10.685 TEST_HEADER include/spdk/crc32.h 00:02:10.685 TEST_HEADER include/spdk/cpuset.h 00:02:10.685 CXX app/trace/trace.o 00:02:10.685 TEST_HEADER include/spdk/crc16.h 00:02:10.685 TEST_HEADER include/spdk/crc64.h 00:02:10.685 TEST_HEADER include/spdk/dif.h 00:02:10.685 TEST_HEADER include/spdk/dma.h 00:02:10.685 CC app/spdk_top/spdk_top.o 00:02:10.685 CC app/trace_record/trace_record.o 00:02:10.685 TEST_HEADER include/spdk/endian.h 00:02:10.685 TEST_HEADER include/spdk/env_dpdk.h 00:02:10.685 TEST_HEADER include/spdk/env.h 00:02:10.685 TEST_HEADER include/spdk/event.h 00:02:10.685 TEST_HEADER include/spdk/fd_group.h 00:02:10.685 TEST_HEADER include/spdk/fd.h 00:02:10.685 CC app/spdk_lspci/spdk_lspci.o 00:02:10.685 TEST_HEADER include/spdk/file.h 00:02:10.685 TEST_HEADER include/spdk/ftl.h 00:02:10.685 TEST_HEADER include/spdk/gpt_spec.h 00:02:10.685 TEST_HEADER include/spdk/hexlify.h 00:02:10.685 TEST_HEADER include/spdk/histogram_data.h 00:02:10.685 TEST_HEADER include/spdk/idxd_spec.h 00:02:10.685 CC app/spdk_nvme_identify/identify.o 00:02:10.685 TEST_HEADER include/spdk/idxd.h 00:02:10.685 CC app/spdk_nvme_perf/perf.o 00:02:10.685 TEST_HEADER include/spdk/init.h 00:02:10.685 TEST_HEADER include/spdk/ioat.h 00:02:10.685 CC app/spdk_nvme_discover/discovery_aer.o 00:02:10.685 TEST_HEADER include/spdk/ioat_spec.h 00:02:10.685 TEST_HEADER include/spdk/iscsi_spec.h 00:02:10.685 TEST_HEADER include/spdk/json.h 00:02:10.685 TEST_HEADER include/spdk/keyring.h 00:02:10.685 TEST_HEADER include/spdk/keyring_module.h 00:02:10.685 TEST_HEADER include/spdk/likely.h 00:02:10.685 TEST_HEADER include/spdk/jsonrpc.h 00:02:10.685 TEST_HEADER include/spdk/log.h 00:02:10.685 TEST_HEADER include/spdk/lvol.h 00:02:10.685 TEST_HEADER include/spdk/mmio.h 00:02:10.685 TEST_HEADER include/spdk/memory.h 00:02:10.685 TEST_HEADER 
include/spdk/nbd.h 00:02:10.685 TEST_HEADER include/spdk/notify.h 00:02:10.685 TEST_HEADER include/spdk/net.h 00:02:10.685 CC app/spdk_dd/spdk_dd.o 00:02:10.685 CC app/nvmf_tgt/nvmf_main.o 00:02:10.685 TEST_HEADER include/spdk/nvme.h 00:02:10.685 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:10.685 TEST_HEADER include/spdk/nvme_intel.h 00:02:10.685 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:10.685 TEST_HEADER include/spdk/nvme_spec.h 00:02:10.685 TEST_HEADER include/spdk/nvme_zns.h 00:02:10.685 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:10.685 TEST_HEADER include/spdk/nvmf.h 00:02:10.685 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:10.685 TEST_HEADER include/spdk/nvmf_spec.h 00:02:10.685 TEST_HEADER include/spdk/nvmf_transport.h 00:02:10.685 TEST_HEADER include/spdk/opal_spec.h 00:02:10.685 TEST_HEADER include/spdk/opal.h 00:02:10.946 TEST_HEADER include/spdk/pci_ids.h 00:02:10.946 TEST_HEADER include/spdk/pipe.h 00:02:10.946 TEST_HEADER include/spdk/queue.h 00:02:10.946 TEST_HEADER include/spdk/reduce.h 00:02:10.946 TEST_HEADER include/spdk/rpc.h 00:02:10.946 TEST_HEADER include/spdk/scsi.h 00:02:10.946 TEST_HEADER include/spdk/scheduler.h 00:02:10.946 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:10.946 TEST_HEADER include/spdk/scsi_spec.h 00:02:10.946 TEST_HEADER include/spdk/sock.h 00:02:10.946 TEST_HEADER include/spdk/stdinc.h 00:02:10.946 TEST_HEADER include/spdk/string.h 00:02:10.946 TEST_HEADER include/spdk/thread.h 00:02:10.946 TEST_HEADER include/spdk/trace.h 00:02:10.946 CC app/iscsi_tgt/iscsi_tgt.o 00:02:10.946 TEST_HEADER include/spdk/trace_parser.h 00:02:10.946 TEST_HEADER include/spdk/tree.h 00:02:10.946 TEST_HEADER include/spdk/ublk.h 00:02:10.946 TEST_HEADER include/spdk/util.h 00:02:10.946 TEST_HEADER include/spdk/uuid.h 00:02:10.946 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:10.946 TEST_HEADER include/spdk/version.h 00:02:10.946 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:10.946 TEST_HEADER include/spdk/vhost.h 00:02:10.946 TEST_HEADER include/spdk/vmd.h 00:02:10.946 TEST_HEADER include/spdk/xor.h 00:02:10.946 TEST_HEADER include/spdk/zipf.h 00:02:10.946 CXX test/cpp_headers/accel.o 00:02:10.946 CXX test/cpp_headers/accel_module.o 00:02:10.946 CXX test/cpp_headers/assert.o 00:02:10.946 CC app/spdk_tgt/spdk_tgt.o 00:02:10.946 CXX test/cpp_headers/barrier.o 00:02:10.946 CXX test/cpp_headers/base64.o 00:02:10.946 CXX test/cpp_headers/bdev.o 00:02:10.946 CXX test/cpp_headers/bdev_zone.o 00:02:10.946 CXX test/cpp_headers/bdev_module.o 00:02:10.946 CXX test/cpp_headers/bit_array.o 00:02:10.946 CXX test/cpp_headers/bit_pool.o 00:02:10.946 CXX test/cpp_headers/blob_bdev.o 00:02:10.946 CXX test/cpp_headers/blobfs.o 00:02:10.946 CXX test/cpp_headers/blobfs_bdev.o 00:02:10.946 CXX test/cpp_headers/blob.o 00:02:10.946 CXX test/cpp_headers/conf.o 00:02:10.946 CXX test/cpp_headers/cpuset.o 00:02:10.946 CXX test/cpp_headers/config.o 00:02:10.946 CXX test/cpp_headers/crc16.o 00:02:10.946 CXX test/cpp_headers/crc32.o 00:02:10.946 CXX test/cpp_headers/crc64.o 00:02:10.946 CXX test/cpp_headers/dma.o 00:02:10.946 CXX test/cpp_headers/dif.o 00:02:10.946 CXX test/cpp_headers/endian.o 00:02:10.946 CXX test/cpp_headers/env_dpdk.o 00:02:10.946 CXX test/cpp_headers/env.o 00:02:10.946 CXX test/cpp_headers/event.o 00:02:10.946 CXX test/cpp_headers/file.o 00:02:10.946 CXX test/cpp_headers/fd_group.o 00:02:10.946 CXX test/cpp_headers/fd.o 00:02:10.946 CXX test/cpp_headers/gpt_spec.o 00:02:10.946 CXX test/cpp_headers/ftl.o 00:02:10.946 CXX test/cpp_headers/hexlify.o 00:02:10.946 CXX 
test/cpp_headers/idxd.o 00:02:10.946 CXX test/cpp_headers/histogram_data.o 00:02:10.946 CXX test/cpp_headers/init.o 00:02:10.946 CXX test/cpp_headers/idxd_spec.o 00:02:10.946 CXX test/cpp_headers/ioat.o 00:02:10.946 CXX test/cpp_headers/iscsi_spec.o 00:02:10.947 CXX test/cpp_headers/ioat_spec.o 00:02:10.947 CXX test/cpp_headers/json.o 00:02:10.947 CXX test/cpp_headers/keyring_module.o 00:02:10.947 CXX test/cpp_headers/keyring.o 00:02:10.947 CXX test/cpp_headers/jsonrpc.o 00:02:10.947 CXX test/cpp_headers/log.o 00:02:10.947 CXX test/cpp_headers/likely.o 00:02:10.947 CXX test/cpp_headers/lvol.o 00:02:10.947 CXX test/cpp_headers/nbd.o 00:02:10.947 CXX test/cpp_headers/mmio.o 00:02:10.947 CXX test/cpp_headers/memory.o 00:02:10.947 CXX test/cpp_headers/notify.o 00:02:10.947 CXX test/cpp_headers/nvme.o 00:02:10.947 CXX test/cpp_headers/net.o 00:02:10.947 CXX test/cpp_headers/nvme_intel.o 00:02:10.947 CXX test/cpp_headers/nvme_ocssd.o 00:02:10.947 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:10.947 CXX test/cpp_headers/nvme_zns.o 00:02:10.947 CXX test/cpp_headers/nvme_spec.o 00:02:10.947 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:10.947 CXX test/cpp_headers/nvmf_cmd.o 00:02:10.947 CXX test/cpp_headers/nvmf.o 00:02:10.947 CXX test/cpp_headers/nvmf_spec.o 00:02:10.947 CXX test/cpp_headers/opal.o 00:02:10.947 CXX test/cpp_headers/nvmf_transport.o 00:02:10.947 CXX test/cpp_headers/opal_spec.o 00:02:10.947 CXX test/cpp_headers/pipe.o 00:02:10.947 CXX test/cpp_headers/pci_ids.o 00:02:10.947 CC test/app/jsoncat/jsoncat.o 00:02:10.947 CXX test/cpp_headers/queue.o 00:02:10.947 CXX test/cpp_headers/rpc.o 00:02:10.947 CXX test/cpp_headers/reduce.o 00:02:10.947 CXX test/cpp_headers/scheduler.o 00:02:10.947 CXX test/cpp_headers/scsi.o 00:02:10.947 CXX test/cpp_headers/scsi_spec.o 00:02:10.947 CXX test/cpp_headers/stdinc.o 00:02:10.947 CC test/thread/poller_perf/poller_perf.o 00:02:10.947 CXX test/cpp_headers/sock.o 00:02:10.947 CXX test/cpp_headers/string.o 00:02:10.947 CXX test/cpp_headers/thread.o 00:02:10.947 CXX test/cpp_headers/trace.o 00:02:10.947 CXX test/cpp_headers/tree.o 00:02:10.947 CXX test/cpp_headers/trace_parser.o 00:02:10.947 CXX test/cpp_headers/ublk.o 00:02:10.947 CXX test/cpp_headers/util.o 00:02:10.947 CXX test/cpp_headers/uuid.o 00:02:10.947 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:10.947 CXX test/cpp_headers/version.o 00:02:10.947 CXX test/cpp_headers/vfio_user_pci.o 00:02:10.947 CXX test/cpp_headers/vfio_user_spec.o 00:02:10.947 CC test/env/memory/memory_ut.o 00:02:10.947 CXX test/cpp_headers/vhost.o 00:02:10.947 CXX test/cpp_headers/vmd.o 00:02:10.947 CXX test/cpp_headers/xor.o 00:02:10.947 CXX test/cpp_headers/zipf.o 00:02:10.947 CC test/app/histogram_perf/histogram_perf.o 00:02:10.947 CC examples/ioat/perf/perf.o 00:02:10.947 CC test/env/vtophys/vtophys.o 00:02:10.947 CC test/app/stub/stub.o 00:02:10.947 CC test/env/pci/pci_ut.o 00:02:10.947 CC examples/ioat/verify/verify.o 00:02:10.947 CC app/fio/nvme/fio_plugin.o 00:02:10.947 CC examples/util/zipf/zipf.o 00:02:10.947 LINK rpc_client_test 00:02:10.947 LINK spdk_lspci 00:02:10.947 CC test/dma/test_dma/test_dma.o 00:02:10.947 CC test/app/bdev_svc/bdev_svc.o 00:02:11.207 CC app/fio/bdev/fio_plugin.o 00:02:11.207 LINK spdk_nvme_discover 00:02:11.207 LINK spdk_trace_record 00:02:11.207 CC test/env/mem_callbacks/mem_callbacks.o 00:02:11.207 LINK nvmf_tgt 00:02:11.207 LINK interrupt_tgt 00:02:11.207 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:11.207 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:11.207 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:11.465 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:11.465 LINK iscsi_tgt 00:02:11.465 LINK spdk_trace 00:02:11.465 LINK vtophys 00:02:11.465 LINK spdk_dd 00:02:11.465 LINK spdk_tgt 00:02:11.465 LINK jsoncat 00:02:11.465 LINK verify 00:02:11.465 LINK poller_perf 00:02:11.465 LINK env_dpdk_post_init 00:02:11.465 LINK zipf 00:02:11.725 LINK histogram_perf 00:02:11.725 LINK bdev_svc 00:02:11.725 LINK stub 00:02:11.725 LINK ioat_perf 00:02:11.725 CC app/vhost/vhost.o 00:02:11.725 LINK test_dma 00:02:11.986 LINK spdk_bdev 00:02:11.986 LINK nvme_fuzz 00:02:11.986 LINK pci_ut 00:02:11.986 LINK vhost_fuzz 00:02:11.986 LINK spdk_nvme 00:02:11.986 LINK spdk_nvme_identify 00:02:11.986 LINK vhost 00:02:11.986 CC test/event/event_perf/event_perf.o 00:02:11.986 CC test/event/reactor_perf/reactor_perf.o 00:02:11.986 CC test/event/reactor/reactor.o 00:02:11.986 LINK mem_callbacks 00:02:11.986 CC test/event/app_repeat/app_repeat.o 00:02:11.986 CC examples/idxd/perf/perf.o 00:02:11.986 CC examples/sock/hello_world/hello_sock.o 00:02:11.986 CC test/event/scheduler/scheduler.o 00:02:11.986 CC examples/vmd/led/led.o 00:02:11.986 CC examples/vmd/lsvmd/lsvmd.o 00:02:11.986 LINK spdk_nvme_perf 00:02:11.986 CC examples/thread/thread/thread_ex.o 00:02:12.246 LINK spdk_top 00:02:12.246 LINK event_perf 00:02:12.246 LINK reactor_perf 00:02:12.246 LINK reactor 00:02:12.246 LINK lsvmd 00:02:12.246 LINK app_repeat 00:02:12.246 LINK led 00:02:12.246 LINK hello_sock 00:02:12.246 LINK scheduler 00:02:12.506 LINK thread 00:02:12.506 LINK idxd_perf 00:02:12.506 CC test/nvme/sgl/sgl.o 00:02:12.506 CC test/nvme/aer/aer.o 00:02:12.506 CC test/nvme/connect_stress/connect_stress.o 00:02:12.506 CC test/nvme/startup/startup.o 00:02:12.506 CC test/nvme/reset/reset.o 00:02:12.506 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:12.506 CC test/nvme/overhead/overhead.o 00:02:12.506 CC test/nvme/boot_partition/boot_partition.o 00:02:12.506 CC test/nvme/cuse/cuse.o 00:02:12.506 CC test/nvme/e2edp/nvme_dp.o 00:02:12.506 CC test/nvme/compliance/nvme_compliance.o 00:02:12.506 CC test/nvme/fused_ordering/fused_ordering.o 00:02:12.506 CC test/nvme/err_injection/err_injection.o 00:02:12.506 CC test/nvme/simple_copy/simple_copy.o 00:02:12.506 CC test/nvme/reserve/reserve.o 00:02:12.506 CC test/nvme/fdp/fdp.o 00:02:12.506 CC test/blobfs/mkfs/mkfs.o 00:02:12.506 CC test/accel/dif/dif.o 00:02:12.506 LINK memory_ut 00:02:12.506 CC test/lvol/esnap/esnap.o 00:02:12.506 LINK boot_partition 00:02:12.506 LINK startup 00:02:12.767 LINK doorbell_aers 00:02:12.767 LINK connect_stress 00:02:12.767 LINK err_injection 00:02:12.767 LINK fused_ordering 00:02:12.767 LINK sgl 00:02:12.767 LINK reserve 00:02:12.767 LINK simple_copy 00:02:12.767 LINK mkfs 00:02:12.767 LINK aer 00:02:12.767 LINK reset 00:02:12.767 LINK nvme_dp 00:02:12.767 LINK overhead 00:02:12.767 LINK fdp 00:02:12.767 LINK nvme_compliance 00:02:12.767 CC examples/nvme/arbitration/arbitration.o 00:02:12.767 CC examples/nvme/reconnect/reconnect.o 00:02:12.767 CC examples/nvme/hello_world/hello_world.o 00:02:12.767 CC examples/nvme/abort/abort.o 00:02:12.767 CC examples/nvme/hotplug/hotplug.o 00:02:12.767 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:12.767 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:12.767 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.026 LINK dif 00:02:13.026 LINK iscsi_fuzz 00:02:13.026 CC examples/accel/perf/accel_perf.o 00:02:13.026 CC examples/blob/hello_world/hello_blob.o 00:02:13.026 CC 
examples/blob/cli/blobcli.o 00:02:13.026 LINK hello_world 00:02:13.026 LINK cmb_copy 00:02:13.026 LINK pmr_persistence 00:02:13.026 LINK hotplug 00:02:13.026 LINK arbitration 00:02:13.286 LINK reconnect 00:02:13.286 LINK abort 00:02:13.286 LINK hello_blob 00:02:13.286 LINK accel_perf 00:02:13.286 LINK nvme_manage 00:02:13.547 LINK blobcli 00:02:13.547 CC test/bdev/bdevio/bdevio.o 00:02:13.547 LINK cuse 00:02:13.808 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.808 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.808 LINK bdevio 00:02:14.068 LINK hello_bdev 00:02:14.643 LINK bdevperf 00:02:15.215 CC examples/nvmf/nvmf/nvmf.o 00:02:15.475 LINK nvmf 00:02:16.862 LINK esnap 00:02:17.124 00:02:17.124 real 0m51.467s 00:02:17.124 user 6m32.808s 00:02:17.124 sys 4m12.721s 00:02:17.124 00:13:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:17.124 00:13:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:17.124 ************************************ 00:02:17.124 END TEST make 00:02:17.124 ************************************ 00:02:17.124 00:13:30 -- common/autotest_common.sh@1142 -- $ return 0 00:02:17.124 00:13:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:17.124 00:13:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:17.124 00:13:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:17.124 00:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.124 00:13:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:17.124 00:13:30 -- pm/common@44 -- $ pid=732390 00:02:17.124 00:13:30 -- pm/common@50 -- $ kill -TERM 732390 00:02:17.124 00:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.124 00:13:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:17.124 00:13:30 -- pm/common@44 -- $ pid=732391 00:02:17.124 00:13:30 -- pm/common@50 -- $ kill -TERM 732391 00:02:17.124 00:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.124 00:13:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:17.124 00:13:30 -- pm/common@44 -- $ pid=732393 00:02:17.124 00:13:30 -- pm/common@50 -- $ kill -TERM 732393 00:02:17.124 00:13:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.124 00:13:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:17.124 00:13:30 -- pm/common@44 -- $ pid=732424 00:02:17.124 00:13:30 -- pm/common@50 -- $ sudo -E kill -TERM 732424 00:02:17.386 00:13:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.386 00:13:30 -- nvmf/common.sh@7 -- # uname -s 00:02:17.386 00:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.386 00:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.387 00:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.387 00:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.387 00:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.387 00:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.387 00:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.387 00:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.387 00:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.387 00:13:30 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.387 00:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:17.387 00:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:17.387 00:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.387 00:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.387 00:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:17.387 00:13:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.387 00:13:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:17.387 00:13:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.387 00:13:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.387 00:13:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.387 00:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.387 00:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.387 00:13:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.387 00:13:30 -- paths/export.sh@5 -- # export PATH 00:02:17.387 00:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.387 00:13:30 -- nvmf/common.sh@47 -- # : 0 00:02:17.387 00:13:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:17.387 00:13:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:17.387 00:13:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.387 00:13:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.387 00:13:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.387 00:13:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:17.387 00:13:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:17.387 00:13:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:17.387 00:13:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.387 00:13:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.387 00:13:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.387 00:13:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.387 00:13:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:17.387 00:13:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.387 00:13:30 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:17.387 00:13:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.387 00:13:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.387 00:13:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.387 00:13:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.387 00:13:30 -- spdk/autotest.sh@48 -- # udevadm_pid=795582 00:02:17.387 00:13:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.387 00:13:30 -- pm/common@17 -- # local monitor 00:02:17.387 00:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.387 00:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.387 00:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.387 00:13:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.387 00:13:30 -- pm/common@21 -- # date +%s 00:02:17.387 00:13:30 -- pm/common@25 -- # sleep 1 00:02:17.387 00:13:30 -- pm/common@21 -- # date +%s 00:02:17.387 00:13:30 -- pm/common@21 -- # date +%s 00:02:17.387 00:13:30 -- pm/common@21 -- # date +%s 00:02:17.387 00:13:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081610 00:02:17.387 00:13:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081610 00:02:17.387 00:13:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081610 00:02:17.387 00:13:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081610 00:02:17.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081610_collect-vmstat.pm.log 00:02:17.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081610_collect-cpu-load.pm.log 00:02:17.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081610_collect-cpu-temp.pm.log 00:02:17.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081610_collect-bmc-pm.bmc.pm.log 00:02:18.330 00:13:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:18.330 00:13:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:18.330 00:13:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:18.330 00:13:31 -- common/autotest_common.sh@10 -- # set +x 00:02:18.330 00:13:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:18.330 00:13:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:18.330 00:13:31 -- common/autotest_common.sh@10 -- # set +x 00:02:18.330 00:13:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:18.330 00:13:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.330 00:13:31 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.330 00:13:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:18.330 00:13:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.330 00:13:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.330 00:13:31 -- common/autotest_common.sh@1455 -- # uname 00:02:18.330 00:13:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:18.330 00:13:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.330 00:13:31 -- common/autotest_common.sh@1475 -- # uname 00:02:18.330 00:13:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:18.331 00:13:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:18.331 00:13:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:18.331 00:13:31 -- spdk/autotest.sh@72 -- # hash lcov 00:02:18.331 00:13:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:18.331 00:13:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:18.331 --rc lcov_branch_coverage=1 00:02:18.331 --rc lcov_function_coverage=1 00:02:18.331 --rc genhtml_branch_coverage=1 00:02:18.331 --rc genhtml_function_coverage=1 00:02:18.331 --rc genhtml_legend=1 00:02:18.331 --rc geninfo_all_blocks=1 00:02:18.331 ' 00:02:18.331 00:13:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:18.331 --rc lcov_branch_coverage=1 00:02:18.331 --rc lcov_function_coverage=1 00:02:18.331 --rc genhtml_branch_coverage=1 00:02:18.331 --rc genhtml_function_coverage=1 00:02:18.331 --rc genhtml_legend=1 00:02:18.331 --rc geninfo_all_blocks=1 00:02:18.331 ' 00:02:18.331 00:13:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:18.331 --rc lcov_branch_coverage=1 00:02:18.331 --rc lcov_function_coverage=1 00:02:18.331 --rc genhtml_branch_coverage=1 00:02:18.331 --rc genhtml_function_coverage=1 00:02:18.331 --rc genhtml_legend=1 00:02:18.331 --rc geninfo_all_blocks=1 00:02:18.331 --no-external' 00:02:18.331 00:13:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:18.331 --rc lcov_branch_coverage=1 00:02:18.331 --rc lcov_function_coverage=1 00:02:18.331 --rc genhtml_branch_coverage=1 00:02:18.331 --rc genhtml_function_coverage=1 00:02:18.331 --rc genhtml_legend=1 00:02:18.331 --rc geninfo_all_blocks=1 00:02:18.331 --no-external' 00:02:18.331 00:13:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:18.592 lcov: LCOV version 1.14 00:02:18.592 00:13:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:22.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:22.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:22.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:22.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:23.061 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:23.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:23.061 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions 
found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:23.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:23.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:23.585 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 
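The block of geninfo warnings above comes from the coverage baseline capture: every test/cpp_headers/*.gcno with no executed functions is reported and then skipped, and the capture itself carries on. A minimal sketch of that baseline step, reusing the LCOV_OPTS the trace exported earlier and the $src/$out variables it set (paths shortened; the real invocation is the one traced from spdk/autotest.sh):

# zero-coverage baseline over the whole tree; header-only .gcno files trigger
# the "no functions found" warnings seen above, which are expected for headers
# with no executed code
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
     --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
     -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"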
00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:23.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:23.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.409 00:14:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:48.409 00:14:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:48.409 00:14:01 -- common/autotest_common.sh@10 -- # set +x 00:02:48.409 00:14:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:48.409 00:14:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.711 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:51.711 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:51.711 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:51.972 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:51.972 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:51.972 00:14:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:51.972 00:14:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:51.972 00:14:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:51.972 00:14:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:51.972 00:14:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:51.972 00:14:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:51.972 00:14:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:51.972 00:14:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned 
]] 00:02:51.972 00:14:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:51.972 00:14:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:51.972 00:14:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:51.972 00:14:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:51.972 00:14:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:51.972 00:14:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:51.972 00:14:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:51.972 No valid GPT data, bailing 00:02:51.972 00:14:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:51.972 00:14:05 -- scripts/common.sh@391 -- # pt= 00:02:51.972 00:14:05 -- scripts/common.sh@392 -- # return 1 00:02:51.972 00:14:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:51.972 1+0 records in 00:02:51.972 1+0 records out 00:02:51.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043715 s, 240 MB/s 00:02:51.972 00:14:05 -- spdk/autotest.sh@118 -- # sync 00:02:51.972 00:14:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:51.972 00:14:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:51.972 00:14:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.117 00:14:12 -- spdk/autotest.sh@124 -- # uname -s 00:03:00.117 00:14:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:00.117 00:14:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.117 00:14:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.117 00:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.117 00:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:00.117 ************************************ 00:03:00.117 START TEST setup.sh 00:03:00.117 ************************************ 00:03:00.117 00:14:12 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.117 * Looking for test storage... 00:03:00.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.117 00:14:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:00.117 00:14:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:00.117 00:14:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.117 00:14:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.117 00:14:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.117 00:14:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.117 ************************************ 00:03:00.117 START TEST acl 00:03:00.117 ************************************ 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.117 * Looking for test storage... 
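The get_zoned_devs trace just above (and repeated at the start of acl.sh below) treats a namespace as zoned only when its queue/zoned sysfs attribute exists and reads something other than "none"; nvme0n1 reports none here, so nothing is excluded before the dd and GPT checks. A rough standalone sketch of that check (the real helper lives in autotest_common.sh and may differ in detail):

# list nvme namespaces whose queue/zoned attribute reports something other
# than "none"; such devices get skipped by the tests (none exist on this runner)
zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    [[ $(<"$nvme/queue/zoned") == none ]] || zoned_devs+=("${nvme##*/}")
done
printf 'zoned namespaces: %s\n' "${zoned_devs[@]:-<none>}"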
00:03:00.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.117 00:14:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:00.117 00:14:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:00.117 00:14:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.117 00:14:12 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.321 00:14:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.321 00:14:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:04.321 00:14:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.321 00:14:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:04.321 00:14:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.321 00:14:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:07.623 Hugepages 00:03:07.623 node hugesize free / total 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:03:07.623 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.623 00:14:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.623 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:07.623 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:07.623 00:14:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:07.623 00:14:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:07.624 00:14:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:07.624 00:14:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.624 00:14:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.624 00:14:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.624 ************************************ 00:03:07.624 START TEST denied 00:03:07.624 ************************************ 00:03:07.624 00:14:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:07.624 00:14:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:07.624 00:14:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:07.624 00:14:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:07.624 00:14:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.624 00:14:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.830 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:11.830 00:14:25 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.830 00:14:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.126 00:03:17.126 real 0m8.852s 00:03:17.126 user 0m3.055s 00:03:17.126 sys 0m5.129s 00:03:17.126 00:14:30 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.126 00:14:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:17.126 ************************************ 00:03:17.126 END TEST denied 00:03:17.126 ************************************ 00:03:17.126 00:14:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:17.126 00:14:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:17.126 00:14:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.126 00:14:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.126 00:14:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.126 ************************************ 00:03:17.126 START TEST allowed 00:03:17.126 ************************************ 00:03:17.126 00:14:30 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:17.126 00:14:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:17.126 00:14:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:17.126 00:14:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:17.126 00:14:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.126 00:14:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.417 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:22.417 00:14:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:22.417 00:14:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:22.417 00:14:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:22.417 00:14:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.417 00:14:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.625 00:03:26.625 real 0m9.800s 00:03:26.625 user 0m2.972s 00:03:26.625 sys 0m5.156s 00:03:26.625 00:14:39 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.625 00:14:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:26.625 ************************************ 00:03:26.625 END TEST allowed 00:03:26.625 ************************************ 00:03:26.625 00:14:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:26.625 00:03:26.625 real 0m27.095s 00:03:26.625 user 0m9.145s 00:03:26.625 sys 0m15.819s 00:03:26.625 00:14:39 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.625 00:14:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.625 ************************************ 00:03:26.625 END TEST acl 00:03:26.625 ************************************ 00:03:26.625 00:14:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:26.625 00:14:39 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:26.625 00:14:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.625 00:14:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.625 00:14:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.625 ************************************ 00:03:26.625 START TEST hugepages 00:03:26.625 ************************************ 00:03:26.625 00:14:40 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:26.625 * Looking for test storage... 00:03:26.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106379540 kB' 'MemAvailable: 110118308 kB' 'Buffers: 4132 kB' 'Cached: 10674124 kB' 'SwapCached: 0 kB' 'Active: 7621144 kB' 'Inactive: 3701320 kB' 'Active(anon): 7129712 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648064 kB' 'Mapped: 199984 kB' 'Shmem: 6485504 kB' 'KReclaimable: 586960 kB' 'Slab: 1471796 kB' 'SReclaimable: 586960 kB' 'SUnreclaim: 884836 kB' 'KernelStack: 27936 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8756244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238368 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.625 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.626 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.627 00:14:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:26.627 00:14:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.627 00:14:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.627 00:14:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.627 ************************************ 00:03:26.627 START TEST default_setup 00:03:26.627 ************************************ 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.627 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.628 00:14:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.842 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:30.842 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.842 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.843 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108543280 kB' 'MemAvailable: 112282040 kB' 'Buffers: 4132 kB' 'Cached: 10674260 kB' 'SwapCached: 0 kB' 'Active: 7636952 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145520 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663184 kB' 'Mapped: 200240 kB' 'Shmem: 6485640 kB' 'KReclaimable: 586952 kB' 'Slab: 1469852 kB' 'SReclaimable: 586952 kB' 
'SUnreclaim: 882900 kB' 'KernelStack: 27936 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8773408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238560 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.843 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108546616 kB' 'MemAvailable: 112285376 kB' 'Buffers: 4132 kB' 'Cached: 10674264 kB' 'SwapCached: 0 kB' 'Active: 7636948 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145516 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663240 kB' 'Mapped: 200232 kB' 'Shmem: 6485644 kB' 'KReclaimable: 586952 kB' 'Slab: 1469356 kB' 'SReclaimable: 586952 kB' 'SUnreclaim: 882404 kB' 'KernelStack: 27968 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8773428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238512 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.845 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108547224 kB' 'MemAvailable: 112285984 kB' 'Buffers: 4132 kB' 'Cached: 10674280 kB' 'SwapCached: 0 kB' 'Active: 7637008 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145576 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663224 kB' 'Mapped: 200232 kB' 'Shmem: 6485660 kB' 'KReclaimable: 586952 kB' 'Slab: 1469388 kB' 'SReclaimable: 586952 kB' 'SUnreclaim: 882436 kB' 'KernelStack: 27968 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8773448 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238512 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.846 
00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.846 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.847 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.848 nr_hugepages=1024 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.848 resv_hugepages=0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.848 surplus_hugepages=0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.848 anon_hugepages=0 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 
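The xtrace above is the hugepage-accounting step of setup.sh: setup/common.sh's get_meminfo walks every /proc/meminfo key with an IFS=': ' read loop and only stops ("echo 0", "return 0") when it reaches the requested field, which is how surp=0, resv=0 and nr_hugepages=1024 are obtained before the per-node checks that follow. A minimal sketch of that pattern is shown below; the helper name get_meminfo_sketch and its exact structure are assumptions reconstructed from this trace, not the actual setup/common.sh source.

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern used to strip "Node N " prefixes

# Hypothetical helper reconstructed from the xtrace above; not the SPDK implementation.
# Look up a single key in /proc/meminfo, optionally scoped to one NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # Per-node queries read the node-specific meminfo file when it exists,
    # e.g. /sys/devices/system/node/node0/meminfo for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <N> "; strip that so keys match.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key until the requested one
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Roughly the accounting this part of the log performs (values taken from this run):
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
surp_node0=$(get_meminfo_sketch HugePages_Surp 0)  # per-node check, as in the node 0 scan below

The per-node calls in the trace (get_meminfo HugePages_Surp 0) follow the same loop, only sourced from /sys/devices/system/node/node0/meminfo instead of the global /proc/meminfo.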
00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108547696 kB' 'MemAvailable: 112286456 kB' 'Buffers: 4132 kB' 'Cached: 10674304 kB' 'SwapCached: 0 kB' 'Active: 7636900 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145468 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663104 kB' 'Mapped: 200232 kB' 'Shmem: 6485684 kB' 'KReclaimable: 586952 kB' 'Slab: 1469388 kB' 'SReclaimable: 586952 kB' 'SUnreclaim: 882436 kB' 'KernelStack: 27952 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8776320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238512 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.848 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.849 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59415568 kB' 'MemUsed: 6243440 kB' 'SwapCached: 0 kB' 'Active: 1477568 kB' 'Inactive: 285896 kB' 'Active(anon): 1319820 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1651864 kB' 'Mapped: 44800 kB' 'AnonPages: 114764 kB' 'Shmem: 1208220 kB' 'KernelStack: 14440 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329096 kB' 'Slab: 764176 kB' 
'SReclaimable: 329096 kB' 'SUnreclaim: 435080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.850 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.850 00:14:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:30.851 node0=1024 expecting 1024 00:14:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:30.851
00:03:30.851 real 0m4.154s
00:03:30.851 user 0m1.691s
00:03:30.851 sys 0m2.470s
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:30.851 00:14:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:30.851 ************************************
00:03:30.851 END TEST default_setup
00:03:30.851 ************************************
00:03:30.851 00:14:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:30.851 00:14:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:30.851 00:14:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:30.851 00:14:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:30.851 00:14:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:30.851 ************************************
00:03:30.851 START TEST per_node_1G_alloc
00:03:30.851 ************************************
00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.851 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.852 00:14:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.065 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:35.065 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108546152 kB' 'MemAvailable: 112284896 kB' 'Buffers: 4132 kB' 'Cached: 10674420 kB' 'SwapCached: 0 kB' 'Active: 7636120 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144688 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661652 kB' 'Mapped: 199160 kB' 'Shmem: 6485800 kB' 'KReclaimable: 586936 kB' 'Slab: 1469244 kB' 'SReclaimable: 586936 kB' 'SUnreclaim: 882308 kB' 'KernelStack: 27888 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8763964 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238512 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.065 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.066 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108546104 kB' 'MemAvailable: 112284848 kB' 'Buffers: 4132 kB' 'Cached: 10674424 kB' 'SwapCached: 0 kB' 'Active: 7636268 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144836 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662256 kB' 'Mapped: 199056 kB' 'Shmem: 6485804 kB' 'KReclaimable: 586936 kB' 'Slab: 1469188 kB' 'SReclaimable: 586936 kB' 'SUnreclaim: 882252 kB' 'KernelStack: 27920 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8763984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238528 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 
00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.067 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108544348 kB' 'MemAvailable: 112283092 kB' 'Buffers: 4132 kB' 'Cached: 10674440 kB' 'SwapCached: 0 kB' 'Active: 7636124 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144692 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662092 kB' 'Mapped: 199056 kB' 'Shmem: 6485820 kB' 'KReclaimable: 586936 kB' 'Slab: 1469188 kB' 'SReclaimable: 586936 kB' 'SUnreclaim: 882252 kB' 'KernelStack: 28032 kB' 'PageTables: 9532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8764008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238608 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.068 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.068 
00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.068 00:14:48 (the same compare-and-continue trace repeats here for every field from MemAvailable through ShmemPmdMapped in the snapshot above, none of which matches HugePages_Rsvd) 00:03:35.070 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.070 00:14:48 
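For readability, here is roughly what the get_meminfo helper traced above (setup/common.sh) does on each call: pick /proc/meminfo, or a node-local meminfo file when a node number is given, strip any "Node N " prefix, then walk the fields with IFS=': ' / read -r var val _ and echo the value of the first field whose name matches, skipping everything else with continue. The sketch below is a reconstruction from this trace, not the verbatim SPDK helper; the while-read loop stands in for the unrolled read/continue pattern the log shows.

  shopt -s extglob                 # needed for the +([0-9]) pattern below

  get_meminfo() {                  # usage: get_meminfo <field> [node]
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # a per-node query reads the node-local file instead of the global one
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix, if any
      while IFS=': ' read -r var val _; do
          # every non-matching field is skipped, which is the long continue trace above
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # e.g. get_meminfo HugePages_Rsvd    -> 0 on this box
  #      get_meminfo HugePages_Surp 0  -> node0's value, as used further down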
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.070 nr_hugepages=1024 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.070 resv_hugepages=0 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.070 surplus_hugepages=0 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.070 anon_hugepages=0 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.070 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.071 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108544480 kB' 'MemAvailable: 112283224 kB' 'Buffers: 4132 kB' 'Cached: 10674460 kB' 'SwapCached: 0 kB' 'Active: 7636208 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144776 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662124 kB' 'Mapped: 199056 kB' 'Shmem: 6485840 kB' 'KReclaimable: 586936 kB' 'Slab: 1469188 kB' 'SReclaimable: 586936 kB' 'SUnreclaim: 882252 kB' 'KernelStack: 27920 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8764028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238560 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 
74448896 kB' 00:03:35.071 00:14:48 (the same compare-and-continue trace repeats here for every field from MemTotal through AnonHugePages in the snapshot above, none of which matches HugePages_Total) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.072 00:14:48 
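The hugepages.sh steps traced around here reduce to a simple consistency check: read HugePages_Surp and HugePages_Rsvd, echo the derived counters, and confirm that the HugePages_Total the kernel reports equals the requested page count plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A minimal sketch of that check, reusing the get_meminfo sketch above; verify_hugepage_pool is a placeholder name, not a function from the SPDK scripts.

  verify_hugepage_pool() {          # usage: verify_hugepage_pool <requested page count>
      local nr_hugepages=$1
      local surp resv
      surp=$(get_meminfo HugePages_Surp)   # 0 in this run
      resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
      # consistent when the reported total covers requested + surplus + reserved pages
      (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
  }

  verify_hugepage_pool 1024 || echo 'hugepage pool mismatch'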
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60459080 kB' 'MemUsed: 5199928 kB' 'SwapCached: 0 kB' 'Active: 1475092 kB' 'Inactive: 285896 kB' 'Active(anon): 1317344 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1651960 kB' 'Mapped: 44020 kB' 'AnonPages: 112184 kB' 'Shmem: 1208316 kB' 'KernelStack: 14344 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329080 kB' 'Slab: 764060 kB' 'SReclaimable: 329080 kB' 'SUnreclaim: 434980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.072 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.072 00:14:48 
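get_nodes and the node-scoped get_meminfo call traced here show the per-node half of the check: enumerate /sys/devices/system/node/node<N>, note how many pages are expected on each node (512 per node across 2 nodes here), then read each node's own meminfo. A rough illustration of that walk follows; nodes_expected is an illustrative name, and pulling the 512 values from the per-node nr_hugepages sysfs file is an assumption about where they come from, since the trace only shows the resulting assignments.

  shopt -s extglob nullglob
  declare -A nodes_expected
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}
      # per-node 2048 kB hugepage count exposed by standard sysfs
      nodes_expected[$n]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "nodes: ${!nodes_expected[*]}  pages per node: ${nodes_expected[*]}"

  # each node's counters are then read through its own meminfo, which is what the
  # node=0 get_meminfo call in this trace does; for example:
  get_meminfo HugePages_Surp 0      # -> 0 on node0 (the value queried here)
  get_meminfo HugePages_Total 0     # -> 512 on node0, per the snapshot above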
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.072 00:14:48 (the same compare-and-continue trace repeats here for every field from MemFree through Unaccepted in the node0 snapshot above, none of which matches HugePages_Surp) 00:03:35.074 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48084620 kB' 'MemUsed: 12595220 kB' 'SwapCached: 0 kB' 'Active: 6161020 kB' 'Inactive: 3415424 kB' 'Active(anon): 5827336 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9026660 kB' 'Mapped: 155036 kB' 'AnonPages: 549848 kB' 'Shmem: 5277552 kB' 
'KernelStack: 13736 kB' 'PageTables: 6148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 257856 kB' 'Slab: 705128 kB' 'SReclaimable: 257856 kB' 'SUnreclaim: 447272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.074 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.075 node0=512 expecting 512 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:35.075 node1=512 expecting 512 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.075 00:03:35.075 real 0m3.970s 00:03:35.075 user 0m1.598s 00:03:35.075 sys 0m2.431s 00:03:35.075 00:14:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.075 00:14:48 
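The trace above is setup/common.sh's get_meminfo helper reading /sys/devices/system/node/node1/meminfo one field at a time under xtrace and returning 0 for HugePages_Surp. A condensed, standalone sketch of that per-node lookup pattern (illustrative only; the function name and parsing details are simplified and are not the SPDK helper itself):

get_node_meminfo() {                      # usage: get_node_meminfo <Field> [<node>]
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # Per-node files expose the same fields, each prefixed with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node * }              # drop the "Node <n> " prefix when present
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

For the node1 snapshot printed above, get_node_meminfo HugePages_Free 1 would print 512; note also that the per-node MemUsed field is simply MemTotal minus MemFree (60679840 kB - 48084620 kB = 12595220 kB in that snapshot).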
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.075 ************************************ 00:03:35.075 END TEST per_node_1G_alloc 00:03:35.075 ************************************ 00:03:35.075 00:14:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.075 00:14:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:35.075 00:14:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.075 00:14:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.075 00:14:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.075 ************************************ 00:03:35.075 START TEST even_2G_alloc 00:03:35.075 ************************************ 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:35.075 00:14:48 
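Before invoking setup.sh, the even_2G_alloc test above derives its targets from a 2 GiB request: with the 2048 kB default hugepage size reported in the meminfo snapshots and two NUMA nodes, that works out to NRHUGE=1024 and an even 512/512 split. A small sketch of that arithmetic (illustrative variable names, not the setup/hugepages.sh internals):

size_kb=2097152                                    # requested pool: 2 GiB expressed in kB
default_hugepage_kb=2048                           # Hugepagesize reported by /proc/meminfo
no_nodes=2                                         # node0 and node1 on this test rig
nr_hugepages=$(( size_kb / default_hugepage_kb ))  # 1024 pages, i.e. NRHUGE=1024
per_node=$(( nr_hugepages / no_nodes ))            # 512 pages expected on each node
echo "NRHUGE=$nr_hugepages ($per_node per node)"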
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.075 00:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.288 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:39.288 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108546696 kB' 'MemAvailable: 112285432 kB' 'Buffers: 4132 kB' 'Cached: 10674604 kB' 'SwapCached: 0 kB' 'Active: 7637012 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145580 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662900 kB' 'Mapped: 199180 kB' 'Shmem: 6485984 kB' 'KReclaimable: 586928 kB' 'Slab: 1468272 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881344 kB' 'KernelStack: 27808 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8761856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.288 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.289 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108548744 kB' 'MemAvailable: 112287480 kB' 'Buffers: 4132 kB' 'Cached: 10674624 kB' 'SwapCached: 0 kB' 'Active: 7637060 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145628 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662980 kB' 'Mapped: 199136 kB' 'Shmem: 6486004 kB' 'KReclaimable: 586928 kB' 'Slab: 1468268 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881340 kB' 'KernelStack: 27808 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8762240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238304 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.290 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.291 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.292 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108548132 kB' 'MemAvailable: 112286868 kB' 'Buffers: 4132 kB' 'Cached: 10674640 kB' 'SwapCached: 0 kB' 'Active: 7637064 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145632 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663008 kB' 'Mapped: 199136 kB' 'Shmem: 6486020 kB' 'KReclaimable: 586928 kB' 'Slab: 1468344 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881416 kB' 'KernelStack: 27824 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8762260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 
00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.293 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.294 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.295 nr_hugepages=1024 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.295 resv_hugepages=0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.295 surplus_hugepages=0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.295 anon_hugepages=0 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.295 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108548276 kB' 'MemAvailable: 112287012 kB' 'Buffers: 4132 kB' 'Cached: 10674680 kB' 'SwapCached: 0 kB' 'Active: 7636752 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145320 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662620 kB' 'Mapped: 199136 kB' 'Shmem: 6486060 kB' 'KReclaimable: 586928 kB' 'Slab: 1468344 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881416 kB' 'KernelStack: 27808 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8762284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238336 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 
00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.295 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
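A side note on how these comparisons render in the log: the right-hand side of each `[[ ... == ... ]]` test is a quoted string, and `set -x` prints quoted words with every character backslash-escaped, which is why the key shows up as `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l`. Quoting (or escaping) the right-hand side forces a literal comparison instead of a glob match, as the small illustration below shows; the variable name is only for demonstration:

```bash
key=HugePages_Total
[[ $key == HugePages_* ]]   && echo "glob match"     # unquoted * acts as a wildcard
[[ $key == "HugePages_*" ]] && echo "literal match"  # quoted RHS must match exactly; prints nothing here
```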
00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.296 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
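Here the trace switches from /proc/meminfo to the per-NUMA-node counters. Each line of /sys/devices/system/node/node0/meminfo carries a "Node 0" prefix, so the array loaded with `mapfile` is first stripped with an extglob substitution and then fed through the same key-matching loop, this time looking for HugePages_Surp. A sketch of that per-node read, assuming extglob and the node0 path shown in the trace:

```bash
#!/usr/bin/env bash
shopt -s extglob          # required for the +([0-9]) pattern below

node=0
mem_f=/proc/meminfo
# Prefer the per-node file when it exists, as the trace does for node=0.
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo

mapfile -t mem < "$mem_f"
# Per-node lines read "Node 0 HugePages_Surp: 0"; drop the "Node 0 " prefix so
# the same "Key: value" parsing works for both files.
mem=("${mem[@]#Node +([0-9]) }")

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] || continue
    echo "$val"           # 0 for both nodes in this run
    break
done
```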
00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60475828 kB' 'MemUsed: 5183180 kB' 'SwapCached: 0 kB' 'Active: 1476968 kB' 'Inactive: 285896 kB' 'Active(anon): 1319220 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1652060 kB' 'Mapped: 44020 kB' 'AnonPages: 114028 kB' 'Shmem: 1208416 kB' 'KernelStack: 14328 kB' 'PageTables: 3256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329072 kB' 'Slab: 763544 kB' 'SReclaimable: 329072 kB' 'SUnreclaim: 434472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.297 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48074840 kB' 'MemUsed: 12605000 kB' 'SwapCached: 0 kB' 'Active: 6160176 kB' 'Inactive: 3415424 kB' 'Active(anon): 5826492 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9026776 kB' 'Mapped: 155116 kB' 'AnonPages: 549028 kB' 'Shmem: 5277668 kB' 'KernelStack: 13464 kB' 'PageTables: 5732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 257856 kB' 'Slab: 704800 kB' 'SReclaimable: 257856 kB' 'SUnreclaim: 446944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 
00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.298 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
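The setup/hugepages.sh entries interleaved through this stretch (@115–@117, with the summary at @126–@128 just below) do the per-node bookkeeping: for each node the expected count is bumped by the reserved pages and by that node's HugePages_Surp (0 here), and the result is echoed as "nodeN=512 expecting 512" before the final comparison. A reduced, hypothetical sketch of that tally, with indexed arrays named after the ones in the trace and a stand-in for the per-node meminfo lookup shown earlier:

```bash
#!/usr/bin/env bash
# Illustrative reduction of the per-node check; values are the ones from this run.
nodes_test=(512 512)      # pages requested per node (even 2G allocation)
nodes_sys=(512 512)       # pages the kernel actually placed per node
resv=0

node_surp() { echo 0; }   # stand-in for "get_meminfo HugePages_Surp <node>"

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(node_surp "$node") ))
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# Matches the log below: node0=512 expecting 512 / node1=512 expecting 512.
```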
00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:39.299 node0=512 expecting 512 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:39.299 node1=512 expecting 512 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:39.299 00:03:39.299 real 0m4.092s 00:03:39.299 user 0m1.656s 00:03:39.299 sys 0m2.505s 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.299 00:14:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:39.299 ************************************ 00:03:39.299 END TEST even_2G_alloc 00:03:39.299 ************************************ 00:03:39.299 00:14:52 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:39.299 00:14:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:39.299 00:14:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.299 00:14:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.299 00:14:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.299 ************************************ 00:03:39.299 START TEST odd_alloc 00:03:39.299 ************************************ 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:39.299 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.300 00:14:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.625 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 
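The odd_alloc test that begins just above asks for HUGEMEM=2049, i.e. 2098176 kB, precisely so that the hugepage count comes out odd: with the 2048 kB hugepages reported in this run that rounds up to 1025 pages, which cannot be split evenly across the two NUMA nodes, hence the 512/513 assignments visible in the trace. A sketch of that arithmetic, assuming 2 MiB pages and two nodes as in this log (the loop is illustrative, not the script's exact assignment order):

```bash
#!/usr/bin/env bash
# Reproduces the 1025-page odd split seen in the trace (illustrative only).
size_kb=2098176        # HUGEMEM=2049 MiB expressed in kB
hugepage_kb=2048       # Hugepagesize reported in this run
no_nodes=2

nr_hugepages=$(( size_kb / hugepage_kb ))            # 1024 by integer division
(( size_kb % hugepage_kb )) && (( nr_hugepages++ ))  # round up -> 1025 pages

base=$(( nr_hugepages / no_nodes ))                  # 512
rem=$(( nr_hugepages % no_nodes ))                   # 1 leftover page
for (( node = 0; node < no_nodes; node++ )); do
    pages=$base
    (( node < rem )) && (( pages++ ))
    echo "node$node gets $pages hugepages"           # 513 on one node, 512 on the other
done
```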
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:43.626 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108543488 kB' 'MemAvailable: 112282224 kB' 'Buffers: 4132 kB' 'Cached: 10674796 kB' 'SwapCached: 0 kB' 'Active: 7639048 kB' 'Inactive: 3701320 kB' 'Active(anon): 7147616 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664568 kB' 'Mapped: 199128 kB' 'Shmem: 6486176 kB' 'KReclaimable: 586928 kB' 'Slab: 1468736 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881808 kB' 'KernelStack: 27936 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8766232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238560 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 
00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.626 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 
00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.627 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108545440 kB' 'MemAvailable: 112284176 kB' 'Buffers: 4132 kB' 'Cached: 10674800 kB' 'SwapCached: 0 kB' 'Active: 7638628 kB' 'Inactive: 3701320 kB' 'Active(anon): 7147196 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664180 kB' 'Mapped: 199096 kB' 'Shmem: 6486180 kB' 'KReclaimable: 586928 kB' 'Slab: 1468708 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881780 kB' 'KernelStack: 27952 kB' 'PageTables: 9572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8766248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238560 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.627 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
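[editor's note] The long runs of "IFS=': '", "read -r var val _", and "continue" above are the get_meminfo loop in setup/common.sh scanning a meminfo snapshot one field at a time until it reaches the requested key (AnonHugePages first, HugePages_Surp here). A minimal sketch of that lookup is below; it is illustrative only, not the literal setup/common.sh code, and get_meminfo_sketch is a made-up name. It assumes the same "Key: value kB" line format and the same "Node <N> " prefix convention for per-node meminfo files that the trace shows.

    # Minimal sketch (illustrative): extract one field from /proc/meminfo, or from a
    # per-node meminfo file when a NUMA node number is supplied.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}          # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"                # e.g. 0 for HugePages_Surp, 126338848 for MemTotal
                return 0
            fi
        done < "$mem_f"
        echo 0                                  # field not present in the snapshot
    }

Against the snapshot printed above, get_meminfo_sketch HugePages_Surp would print 0 and get_meminfo_sketch HugePages_Total would print 1025, which is exactly what the traced "echo 0" / "return 0" steps report.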
00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.628 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 
00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108545692 kB' 'MemAvailable: 112284428 kB' 'Buffers: 4132 kB' 'Cached: 10674816 kB' 'SwapCached: 0 kB' 'Active: 7639368 kB' 'Inactive: 3701320 kB' 'Active(anon): 7147936 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664872 kB' 'Mapped: 199600 kB' 'Shmem: 6486196 kB' 'KReclaimable: 586928 kB' 'Slab: 1468732 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881804 kB' 'KernelStack: 27872 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8767728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238544 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.629 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
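[editor's note] Each of the meminfo snapshots dumped above reports HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB, and the Hugetlb accounting is internally consistent with those values. A quick check (values taken from the log, not recomputed on the test machine):

    echo $(( 1025 * 2048 ))    # 2099200, matching the 'Hugetlb: 2099200 kB' field above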
00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.630 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:43.631 nr_hugepages=1025 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.631 resv_hugepages=0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.631 surplus_hugepages=0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.631 anon_hugepages=0 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108538380 kB' 'MemAvailable: 112277116 kB' 'Buffers: 4132 kB' 'Cached: 10674836 kB' 'SwapCached: 0 kB' 'Active: 7643772 kB' 'Inactive: 3701320 kB' 'Active(anon): 7152340 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669840 kB' 'Mapped: 199600 kB' 'Shmem: 6486216 kB' 'KReclaimable: 586928 kB' 'Slab: 1468732 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 881804 kB' 'KernelStack: 28000 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8772408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238516 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 
00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.631 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
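The field-by-field scan running through this stretch of the trace is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it reaches the requested one (HugePages_Total here, reported as 1025). A minimal reconstruction of that helper, assuming only what the trace itself shows; the real setup/common.sh may differ in details:

  shopt -s extglob                          # the +([0-9]) pattern below needs extglob

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo mem
      # per-node queries read that node's own meminfo when it exists
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # skip every other key, as the trace shows
          echo "$val"                       # e.g. 1025 for HugePages_Total on this box
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called with a node argument, as the per-node checks further down do (get_meminfo HugePages_Surp 0), it reads /sys/devices/system/node/node0/meminfo instead of the global file.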
00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.632 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60475088 kB' 'MemUsed: 5183920 kB' 'SwapCached: 0 kB' 'Active: 1479220 kB' 'Inactive: 285896 kB' 'Active(anon): 1321472 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1652136 kB' 'Mapped: 44020 kB' 'AnonPages: 116140 kB' 'Shmem: 1208492 kB' 'KernelStack: 14504 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329072 kB' 'Slab: 763912 kB' 'SReclaimable: 329072 kB' 'SUnreclaim: 434840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.633 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48066228 kB' 'MemUsed: 12613612 kB' 'SwapCached: 0 kB' 'Active: 6159800 kB' 'Inactive: 3415424 kB' 'Active(anon): 5826116 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9026876 kB' 'Mapped: 155076 kB' 'AnonPages: 548400 kB' 'Shmem: 5277768 kB' 'KernelStack: 13448 kB' 'PageTables: 5664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 257856 kB' 'Slab: 704820 kB' 'SReclaimable: 257856 kB' 'SUnreclaim: 446964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.634 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:43.635 node0=512 expecting 513 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:43.635 node1=513 expecting 512 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:43.635 00:03:43.635 real 0m4.039s 00:03:43.635 user 0m1.640s 00:03:43.635 sys 0m2.463s 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.635 00:14:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 END TEST odd_alloc 00:03:43.635 ************************************ 00:03:43.635 00:14:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:43.635 00:14:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:43.635 00:14:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.635 00:14:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.635 00:14:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 START TEST custom_alloc 00:03:43.635 ************************************ 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.635 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.636 00:14:56 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.636 00:14:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.938 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 
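Before the PCI re-bind messages above, the custom_alloc trace has already worked out its per-node layout: the 1048576 kB and 2097152 kB requests become 512 and 1024 pages of the 2048 kB default size, joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages total) and handed to scripts/setup.sh. A sketch of that arithmetic, assuming the 2048 kB Hugepagesize shown in the meminfo dumps; helper and variable names follow the trace, but the real hugepages.sh carries more bookkeeping:

  default_hugepages=2048                   # kB, matches "Hugepagesize: 2048 kB" in the dumps

  get_test_nr_hugepages() {                # convert a pool size in kB into a page count
      local size=$1
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))
  }

  declare -a nodes_hp hugenode
  get_test_nr_hugepages 1048576 && nodes_hp[0]=$nr_hugepages   # 512 pages for node 0
  get_test_nr_hugepages 2097152 && nodes_hp[1]=$nr_hugepages   # 1024 pages for node 1

  _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( _nr_hugepages += nodes_hp[node] ))
  done

  HUGENODE=$(IFS=,; echo "${hugenode[*]}")
  echo "HUGENODE=$HUGENODE"                # nodes_hp[0]=512,nodes_hp[1]=1024
  echo "total hugepages requested: $_nr_hugepages"   # 1536
  # the test then runs scripts/setup.sh with this HUGENODE value, which produces the
  # "Already using the vfio-pci driver" messages above

The 1536 total is what the follow-up verify_nr_hugepages pass checks against HugePages_Total in the next meminfo dump.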
00:03:46.939 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.939 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.939 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107458496 kB' 'MemAvailable: 111197232 kB' 'Buffers: 4132 kB' 'Cached: 10674972 kB' 'SwapCached: 0 kB' 'Active: 7645516 kB' 'Inactive: 3701320 kB' 'Active(anon): 7154084 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 670800 kB' 'Mapped: 199652 kB' 'Shmem: 6486352 kB' 'KReclaimable: 586928 kB' 'Slab: 1469164 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882236 kB' 'KernelStack: 27920 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8773420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238336 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.205 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
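Editor's note: the long key-by-key comparisons above are the set -x trace of the get_meminfo helper in setup/common.sh walking the /proc/meminfo dump one "key: value" pair at a time (the [[ -e /sys/devices/system/node/node/meminfo ]] check suggests it can also read a per-node meminfo when a node is given), skipping every non-matching key with a logged continue until it reaches the requested field; here it returns 0 for AnonHugePages and is about to do the same for HugePages_Surp. A compact sketch of that lookup pattern, written by the editor as an assumption rather than a copy of the SPDK helper, looks like this:

  # sketch: look up one field in a meminfo-style file (assumption, not SPDK code)
  get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }
  get_meminfo_value HugePages_Surp   # should print 0 given the dump traced above

verify_nr_hugepages then uses these values (anon, surp, and next HugePages_Rsvd) to check that the 1536 pages requested via HUGENODE are actually present and unreserved.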
00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107454212 kB' 'MemAvailable: 111192948 kB' 'Buffers: 4132 kB' 'Cached: 10674972 kB' 'SwapCached: 0 kB' 'Active: 7641672 kB' 'Inactive: 3701320 kB' 'Active(anon): 7150240 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666956 kB' 'Mapped: 199132 kB' 'Shmem: 6486352 kB' 'KReclaimable: 586928 kB' 'Slab: 1469164 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882236 kB' 'KernelStack: 27888 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8767320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238304 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.206 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107452300 kB' 'MemAvailable: 111191036 kB' 'Buffers: 4132 kB' 'Cached: 10674992 kB' 'SwapCached: 0 kB' 'Active: 7646352 kB' 'Inactive: 3701320 kB' 'Active(anon): 7154920 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 672156 kB' 'Mapped: 199632 kB' 'Shmem: 6486372 kB' 'KReclaimable: 586928 kB' 'Slab: 1469172 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882244 kB' 'KernelStack: 27904 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8774792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238308 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.207 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
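[editor note] The xtrace above is setup/common.sh's get_meminfo helper stepping key-by-key through /proc/meminfo and skipping every field that is not the requested HugePages_Rsvd. The following is a minimal standalone sketch of that parsing pattern, reconstructed from the trace; the function name get_meminfo_sketch and the direct file read are illustrative simplifications (the traced helper buffers the file with mapfile and printf first), not the SPDK source itself.

# Sketch: print the value of one /proc/meminfo key, mirroring the
# IFS=': ' / read / continue loop traced above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}       # key to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"                        # the "kB" unit lands in the discarded field
        return 0
    done <"$mem_f"
    return 1
}

# e.g. get_meminfo_sketch HugePages_Rsvd   -> prints 0 on the box traced above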
00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:47.208 nr_hugepages=1536 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.208 resv_hugepages=0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.208 surplus_hugepages=0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.208 anon_hugepages=0 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107452260 kB' 'MemAvailable: 111190996 kB' 'Buffers: 4132 kB' 'Cached: 10675016 kB' 'SwapCached: 0 kB' 'Active: 7642920 kB' 'Inactive: 3701320 kB' 'Active(anon): 7151488 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669260 kB' 'Mapped: 200044 kB' 'Shmem: 6486396 kB' 'KReclaimable: 586928 kB' 'Slab: 
1469172 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882244 kB' 'KernelStack: 27936 kB' 'PageTables: 9472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8790204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
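[editor note] Around this point the script cross-checks the counters it just read: the kernel-reported HugePages_Total (1536 in this run) must equal the requested nr_hugepages plus surplus and reserved pages (setup/hugepages.sh@107 and @110 in the trace). A compact sketch of that bookkeeping, using the values echoed above; variable names here are illustrative.

# Sketch of the accounting asserted by the hugepages.sh checks traced above.
nr_hugepages=1536   # requested custom allocation (from the trace)
surp=0              # surplus_hugepages reported above
resv=0              # resv_hugepages reported above
total=1536          # HugePages_Total read back from /proc/meminfo

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi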
00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.208 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60464276 kB' 'MemUsed: 5194732 kB' 'SwapCached: 0 kB' 'Active: 1481028 kB' 'Inactive: 285896 kB' 'Active(anon): 1323280 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1652156 kB' 'Mapped: 44784 kB' 'AnonPages: 118216 kB' 'Shmem: 1208512 kB' 'KernelStack: 14296 kB' 'PageTables: 3184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329072 kB' 'Slab: 764144 kB' 'SReclaimable: 329072 kB' 'SUnreclaim: 435072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 
00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 
00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
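[editor note] For the per-node pass, get_meminfo switches to /sys/devices/system/node/node0/meminfo; those files prefix every line with "Node <id> ", which the traced helper strips with the extglob pattern "Node +([0-9]) " before running the same key/value loop. A standalone sketch of that per-node lookup (the function name node_meminfo_sketch is illustrative):

# Sketch: per-NUMA-node lookup as traced for node 0.
shopt -s extglob                    # needed for the +([0-9]) pattern
node_meminfo_sketch() {
    local get=$1 node=$2
    local f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " style prefix
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. node_meminfo_sketch HugePages_Surp 0   -> 0 in the run above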
00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
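[editor note] The custom allocation splits the 1536 pages unevenly across the two NUMA nodes (512 on node 0, 1024 on node 1, per the setup/hugepages.sh@30 assignments above), and the loop being traced folds each node's reserved and surplus counts into the expected per-node totals. A sketch of that bookkeeping with the values from this run, assuming nodes_test starts from the per-node split produced by get_nodes:

# Sketch of the per-node bookkeeping traced in the hugepages.sh loop above.
nodes_test=( [0]=512 [1]=1024 )   # per-node split, indexed by NUMA node id
resv=0                            # reserved hugepages for this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # fold in reserved pages
    surp=0                           # HugePages_Surp read back for this node (0 in the trace)
    (( nodes_test[node] += surp ))   # fold in surplus pages
    echo "node$node expected hugepages: ${nodes_test[node]}"
done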
00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.209 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46992648 kB' 'MemUsed: 13687192 kB' 'SwapCached: 0 kB' 'Active: 6163028 kB' 'Inactive: 3415424 kB' 'Active(anon): 5829344 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9027008 kB' 'Mapped: 155108 kB' 'AnonPages: 551632 kB' 'Shmem: 5277900 kB' 'KernelStack: 13464 kB' 'PageTables: 5684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 257856 kB' 'Slab: 705028 kB' 'SReclaimable: 257856 kB' 'SUnreclaim: 447172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 
00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.210 00:15:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.210 node0=512 expecting 512 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:47.210 node1=1024 expecting 1024 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:47.210 00:03:47.210 real 0m3.992s 00:03:47.210 user 0m1.587s 00:03:47.210 sys 0m2.468s 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.210 00:15:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.210 ************************************ 00:03:47.210 END TEST custom_alloc 00:03:47.210 ************************************ 00:03:47.210 00:15:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.210 00:15:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.210 00:15:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.210 00:15:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.210 00:15:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.470 ************************************ 00:03:47.471 START TEST no_shrink_alloc 00:03:47.471 ************************************ 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.471 00:15:00 
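Before the no_shrink_alloc body continues below, the trace has just established two things: the custom_alloc case passed with node0=512 and node1=1024 hugepages (matching the expected "512,1024"), and no_shrink_alloc converts a 2097152 kB request into 1024 default-sized pages (Hugepagesize is 2048 kB in this run), all assigned to node 0. A minimal standalone sketch of that sizing arithmetic and the expectation echo, using illustrative variable names rather than the real setup/hugepages.sh helpers:

```bash
#!/usr/bin/env bash
# Sketch of the sizing step traced above (names are illustrative, not SPDK's).

hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 in this log

request_kb=2097152                                   # size passed to get_test_nr_hugepages
nr_hugepages=$(( request_kb / hugepagesize_kb ))     # 2097152 / 2048 = 1024

echo "requesting ${nr_hugepages} hugepages of ${hugepagesize_kb} kB"

# The split that custom_alloc just verified; no_shrink_alloc instead puts all 1024 on node 0.
expected=(512 1024)
for node in "${!expected[@]}"; do
    echo "node${node}=${expected[node]} expecting ${expected[node]}"
done
```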
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.471 00:15:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.684 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.684 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108490416 kB' 'MemAvailable: 112229152 kB' 'Buffers: 4132 kB' 'Cached: 10675164 kB' 'SwapCached: 0 kB' 'Active: 7648716 kB' 'Inactive: 3701320 kB' 'Active(anon): 7157284 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674116 kB' 'Mapped: 200096 kB' 'Shmem: 6486544 kB' 'KReclaimable: 586928 kB' 'Slab: 1469680 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882752 kB' 'KernelStack: 27952 kB' 'PageTables: 9552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8776588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238404 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 
00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.684 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
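The long field-by-field scan running through these lines is setup/common.sh's get_meminfo walking the /proc/meminfo dump (or a per-node copy under /sys/devices/system/node) until it reaches the requested key; here it is looking for AnonHugePages, and it repeats the same walk later for HugePages_Surp and HugePages_Rsvd. A compact sketch of the same lookup pattern, assuming the hypothetical name meminfo_value (the real helper is get_meminfo):

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the lookup pattern traced above: pick one field
# out of /proc/meminfo (or a per-node meminfo) by name.

meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node figures live under sysfs and prefix every line with "Node <N> ".
    [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
        file=/sys/devices/system/node/node${node}/meminfo

    local line var val _
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#Node "$node" }   # strip the per-node prefix
        IFS=': ' read -r var val _ <<< "$line"         # e.g. var=HugePages_Surp val=0
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# The lookups this part of the log performs:
meminfo_value AnonHugePages        # 0 kB in this run, so anon=0
meminfo_value HugePages_Surp       # 0, so surp=0
meminfo_value HugePages_Total 0    # per-node variant, as used for the node0/node1 checks
```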
00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.685 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108490416 kB' 'MemAvailable: 112229152 kB' 'Buffers: 4132 kB' 'Cached: 10675168 kB' 'SwapCached: 0 kB' 'Active: 7647832 kB' 'Inactive: 3701320 kB' 'Active(anon): 7156400 
kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 673228 kB' 'Mapped: 200016 kB' 'Shmem: 6486548 kB' 'KReclaimable: 586928 kB' 'Slab: 1469636 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882708 kB' 'KernelStack: 27936 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8776476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238388 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.686 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.687 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108493868 kB' 'MemAvailable: 112232604 kB' 'Buffers: 4132 kB' 'Cached: 10675184 kB' 'SwapCached: 0 kB' 'Active: 7648116 kB' 'Inactive: 3701320 kB' 'Active(anon): 7156684 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 673492 kB' 'Mapped: 200016 kB' 'Shmem: 6486564 kB' 
'KReclaimable: 586928 kB' 'Slab: 1469664 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882736 kB' 'KernelStack: 27824 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8776624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238372 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.688 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.689 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.690 nr_hugepages=1024 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.690 resv_hugepages=0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.690 surplus_hugepages=0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.690 anon_hugepages=0 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108494936 kB' 'MemAvailable: 112233672 kB' 'Buffers: 4132 kB' 'Cached: 10675224 kB' 'SwapCached: 0 kB' 'Active: 7647892 kB' 'Inactive: 3701320 kB' 'Active(anon): 7156460 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 673156 kB' 'Mapped: 
200016 kB' 'Shmem: 6486604 kB' 'KReclaimable: 586928 kB' 'Slab: 1469664 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882736 kB' 'KernelStack: 27872 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8778012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238404 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.690 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.691 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59426164 kB' 'MemUsed: 6232844 kB' 'SwapCached: 0 kB' 'Active: 1477620 kB' 'Inactive: 285896 kB' 'Active(anon): 1319872 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1652172 kB' 'Mapped: 44736 kB' 'AnonPages: 114532 kB' 'Shmem: 1208528 kB' 'KernelStack: 14520 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329072 kB' 'Slab: 764372 kB' 'SReclaimable: 329072 kB' 'SUnreclaim: 435300 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.692 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 
00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.693 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:51.694 node0=1024 expecting 1024 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.694 00:15:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.909 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.909 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:03:55.909 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.909 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.909 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108560592 kB' 'MemAvailable: 112299328 kB' 'Buffers: 4132 kB' 'Cached: 10675316 kB' 'SwapCached: 0 kB' 'Active: 7650468 kB' 'Inactive: 3701320 kB' 'Active(anon): 7159036 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675620 kB' 'Mapped: 200576 kB' 'Shmem: 6486696 kB' 'KReclaimable: 586928 kB' 'Slab: 1469496 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882568 kB' 'KernelStack: 28080 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8779148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
238580 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.910 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 
00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.911 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
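The trace up to this point is setup/common.sh's get_meminfo helper scanning the /proc/meminfo snapshot printed above with IFS=': ' and read -r var val _, skipping every key with continue until it reaches the one requested (AnonHugePages here) and echoing its value, 0, back to hugepages.sh, which records it as anon=0. A minimal sketch of that lookup pattern, assuming a hypothetical name get_meminfo_sketch and reading /proc/meminfo directly (the real helper also handles the per-node meminfo file and strips its "Node N" prefixes):

    get_meminfo_sketch() {
        # Key to look up, e.g. AnonHugePages, HugePages_Surp or HugePages_Rsvd.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key matches.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        # Key not present: report 0, matching the trace's fallback 'echo 0'.
        echo 0
    }

Against the snapshot shown above, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch AnonHugePages would print 0.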
00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108560040 kB' 'MemAvailable: 112298776 kB' 'Buffers: 4132 kB' 'Cached: 10675320 kB' 'SwapCached: 0 kB' 'Active: 7650240 kB' 'Inactive: 3701320 kB' 'Active(anon): 7158808 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675500 kB' 'Mapped: 200564 kB' 'Shmem: 6486700 kB' 'KReclaimable: 586928 kB' 'Slab: 1469492 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882564 kB' 'KernelStack: 28032 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8779412 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238532 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.912 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.913 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.914 
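The same lookup has now run for HugePages_Surp (surp=0) and is repeated below for HugePages_Rsvd; verify_nr_hugepages folds these into the per-node totals to reach the "node0=1024 expecting 1024" check seen earlier, and setup.sh left the existing allocation untouched because the requested NRHUGE=512 is already covered ("Requested 512 hugepages but 1024 already allocated on node0"). One way to spot-check the per-node count outside the test scripts is the standard sysfs counter; a sketch assuming node 0 and the 2048 kB hugepage size reported in the snapshot:

    # Hypothetical manual check; nr_hugepages under sysfs is the per-node count.
    node=0
    nr=$(cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node${node}=${nr} expecting 1024"
    [[ $nr -eq 1024 ]] && echo OK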
00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108568332 kB' 'MemAvailable: 112307068 kB' 'Buffers: 4132 kB' 'Cached: 10675340 kB' 'SwapCached: 0 kB' 'Active: 7644568 kB' 'Inactive: 3701320 kB' 'Active(anon): 7153136 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669940 kB' 'Mapped: 199160 kB' 'Shmem: 6486720 kB' 'KReclaimable: 586928 kB' 'Slab: 1469556 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882628 kB' 'KernelStack: 28016 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8771240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238496 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.914 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.915 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.916 nr_hugepages=1024 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.916 resv_hugepages=0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.916 surplus_hugepages=0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.916 anon_hugepages=0 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108567468 kB' 'MemAvailable: 112306204 kB' 'Buffers: 4132 kB' 'Cached: 10675360 kB' 'SwapCached: 0 kB' 'Active: 7644772 kB' 'Inactive: 3701320 kB' 'Active(anon): 7153340 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670592 kB' 'Mapped: 199664 kB' 'Shmem: 6486740 kB' 'KReclaimable: 586928 kB' 'Slab: 1469472 kB' 'SReclaimable: 586928 kB' 'SUnreclaim: 882544 kB' 'KernelStack: 27968 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8771860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238448 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.916 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.917 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.917 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.918 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.919 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59485800 kB' 'MemUsed: 6173208 kB' 'SwapCached: 0 kB' 'Active: 1485380 kB' 'Inactive: 285896 kB' 'Active(anon): 1327632 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1652216 kB' 'Mapped: 44524 kB' 'AnonPages: 122744 kB' 'Shmem: 1208572 kB' 'KernelStack: 14344 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329072 kB' 'Slab: 764000 kB' 'SReclaimable: 329072 kB' 'SUnreclaim: 434928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.919 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 
00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.920 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
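The passes above are setup/common.sh's get_meminfo helper: it picks the meminfo path (the per-node file when a node argument is given, /proc/meminfo otherwise), strips any leading "Node N " prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value (HugePages_Rsvd gives 0, HugePages_Total gives 1024, HugePages_Surp on node 0 gives 0). A condensed standalone sketch of that pattern follows; it is a simplified rewrite for illustration, not the script body itself, and assumes the field layout shown in the trace.

shopt -s extglob   # needed for the +([0-9]) prefix strip below

# get_meminfo_sketch KEY [NODE]
# Print one field from /proc/meminfo, or from
# /sys/devices/system/node/nodeN/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total    prints 1024 on this host
#      get_meminfo_sketch HugePages_Surp 0   prints 0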
00:03:55.921 node0=1024 expecting 1024 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.921 00:03:55.921 real 0m8.164s 00:03:55.921 user 0m3.150s 00:03:55.921 sys 0m5.153s 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.921 00:15:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.921 ************************************ 00:03:55.921 END TEST no_shrink_alloc 00:03:55.921 ************************************ 00:03:55.921 00:15:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.921 00:15:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.921 00:03:55.921 real 0m29.051s 00:03:55.921 user 0m11.575s 00:03:55.921 sys 0m17.912s 00:03:55.921 00:15:09 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.921 00:15:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.921 ************************************ 00:03:55.921 END TEST hugepages 00:03:55.921 ************************************ 00:03:55.921 00:15:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.921 00:15:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:55.921 00:15:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.921 00:15:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.921 00:15:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.921 ************************************ 00:03:55.921 START TEST driver 00:03:55.921 ************************************ 00:03:55.921 00:15:09 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:55.921 * Looking for test storage... 
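The hugepages section above closes with two steps worth spelling out: setup/common.sh walks a meminfo listing field by field until it reaches HugePages_Surp (0 here), and clear_hp then zeroes every per-node hugepage pool before control passes to the driver test. A minimal bash sketch of both helpers, reconstructed from the trace (the function names are mine, and the real helper in setup/common.sh also handles per-node meminfo files, which this sketch does not):

    #!/usr/bin/env bash
    # Sketch of the two hugepage helpers traced above (reconstruction, not SPDK code).

    get_meminfo() {
        # Return the value of one /proc/meminfo field, e.g. HugePages_Surp.
        local field=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$field" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    clear_node_hugepages() {
        # Reset every hugepage pool of one NUMA node to 0 pages (needs root).
        local node=$1 hp
        for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
            [[ -d $hp ]] || continue
            echo 0 > "$hp/nr_hugepages"
        done
    }

    echo "node HugePages_Surp=$(get_meminfo HugePages_Surp)"
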
00:03:55.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.921 00:15:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:55.921 00:15:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.921 00:15:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.211 00:15:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:01.211 00:15:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.211 00:15:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.211 00:15:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.211 ************************************ 00:04:01.211 START TEST guess_driver 00:04:01.211 ************************************ 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:01.211 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:01.212 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver 
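The guess_driver logic traced above reduces to: note whether vfio allows unsafe no-IOMMU mode, count the populated IOMMU groups (370 on this host), and accept vfio-pci if modprobe can resolve the module and its dependency chain. A rough standalone reconstruction follows; the exact way driver.sh combines the two conditions is not fully visible in the trace, so treat the if-logic as an approximation:

    #!/usr/bin/env bash
    # Sketch: decide whether vfio-pci is usable, mirroring the checks in the trace above.
    shopt -s nullglob

    pick_vfio_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi

        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            # The module (and its dependency chain) must actually resolve to .ko files.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }

    driver=$(pick_vfio_driver)
    echo "Looking for driver=$driver"
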
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:01.212 Looking for driver=vfio-pci 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.212 00:15:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.512 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.773 00:15:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.068 00:04:10.068 real 0m9.039s 00:04:10.068 user 0m2.935s 00:04:10.068 sys 0m5.355s 00:04:10.068 00:15:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.068 00:15:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.068 ************************************ 00:04:10.068 END TEST guess_driver 00:04:10.068 ************************************ 00:04:10.068 00:15:23 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:10.068 00:04:10.068 real 0m14.280s 00:04:10.068 user 0m4.525s 00:04:10.068 sys 0m8.293s 00:04:10.068 00:15:23 
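The long run of read -r _ _ _ _ marker setup_driver / [[ -> == \-\> ]] checks above is guess_driver confirming its choice: it re-runs setup.sh in config mode and, for every output line that carries a "->" rebind marker, checks that the driver on the right-hand side matches the one it guessed, bumping fail otherwise. A stripped-down version of that loop; the column layout of the config output is an assumption inferred from the field positions the read uses, and the sample lines are stand-ins, not real setup.sh output:

    #!/usr/bin/env bash
    # Sketch: verify every "->" rebind line reports the expected driver.

    expected=vfio-pci
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue      # only lines that describe a rebind
        if [[ $setup_driver != "$expected" ]]; then
            echo "unexpected driver: $setup_driver (wanted $expected)" >&2
            fail=1
        fi
    done < <(printf '%s\n' \
        '0000:65:00.0 (8086 0a54): nvme -> vfio-pci' \
        '0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci')   # stand-in config lines

    (( fail == 0 )) && echo "all devices bound to $expected"
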
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.068 00:15:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.068 ************************************ 00:04:10.068 END TEST driver 00:04:10.068 ************************************ 00:04:10.068 00:15:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.068 00:15:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.068 00:15:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.068 00:15:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.068 00:15:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.068 ************************************ 00:04:10.068 START TEST devices 00:04:10.068 ************************************ 00:04:10.068 00:15:23 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.068 * Looking for test storage... 00:04:10.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.068 00:15:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.068 00:15:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:10.068 00:15:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.068 00:15:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.278 00:15:27 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:14.278 00:15:27 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:14.279 00:15:27 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:14.279 
00:15:27 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:14.279 No valid GPT data, bailing 00:04:14.279 00:15:27 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.279 00:15:27 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:14.279 00:15:27 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:14.279 00:15:27 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:14.279 00:15:27 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:14.279 00:15:27 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:14.279 00:15:27 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:14.279 00:15:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.279 00:15:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.279 00:15:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.279 ************************************ 00:04:14.279 START TEST nvme_mount 00:04:14.279 ************************************ 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
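Before nvme_mount runs, the devices test above has to convince itself that /dev/nvme0n1 is safe to scribble on: the namespace is not zoned, spdk-gpt.py and blkid agree there is no usable partition table ("No valid GPT data, bailing" and an empty PTTYPE), and the reported size of 1920383410176 bytes (about 1.75 TiB) clears the 3221225472-byte minimum. A simplified version of that eligibility scan; spdk-gpt.py is replaced here by a plain blkid probe, so this is only an approximation of the real block_in_use check, and blkid may need root:

    #!/usr/bin/env bash
    # Sketch: find NVMe namespaces that are non-zoned, unpartitioned and large enough.
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

    eligible=()
    for dev in /sys/block/nvme*n*; do
        [[ -d $dev ]] || continue
        name=${dev##*/}
        # Skip zoned namespaces; the test only wants conventional block devices.
        [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]] && continue
        # Skip devices that already carry a partition table.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]] && continue
        size=$(( $(< "$dev/size") * 512 ))      # size file counts 512-byte sectors
        (( size >= min_disk_size )) && eligible+=("$name")
    done
    printf 'eligible test disk: %s\n' "${eligible[@]}"
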
# (( part <= part_no )) 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.279 00:15:27 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.665 Creating new GPT entries in memory. 00:04:15.665 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.665 other utilities. 00:04:15.665 00:15:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.665 00:15:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.665 00:15:28 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.665 00:15:28 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.665 00:15:28 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:16.610 Creating new GPT entries in memory. 00:04:16.610 The operation has completed successfully. 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 839291 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.610 00:15:29 
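The nvme_mount steps traced above are the standard prepare-a-scratch-filesystem sequence: wipe any existing partition table, create one 1 GiB partition (sectors 2048-2099199), format it with ext4 and mount it. Condensed into a plain script; the device node and mount point are placeholders, and udevadm settle stands in for the sync_dev_uevents.sh helper the test actually uses:

    #!/usr/bin/env bash
    # Sketch: zap, partition, format and mount a scratch NVMe disk. DESTRUCTIVE.
    set -euo pipefail

    disk=/dev/nvme0n1        # placeholder: the disk picked by the devices test
    mnt=/tmp/nvme_mount      # placeholder mount point

    sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199      # one 1 GiB partition, same bounds as above
    udevadm settle                           # wait for the partition node to appear

    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    echo "mounted ${disk}p1 on $mnt"
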
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.610 00:15:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.914 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.914 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:19.914 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:19.914 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:19.914 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- 
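cleanup_nvme, traced above, is the mirror image of the mount step: unmount if the mount point is still busy, then wipefs the partition and the whole disk (the "53 ef" and "45 46 49 20 50 41 52 54" bytes in the output are the ext4 superblock magic and the "EFI PART" GPT signatures being erased) before the test re-formats the bare disk. A sketch of that teardown with placeholder paths:

    #!/usr/bin/env bash
    # Sketch: tear down the nvme_mount scratch filesystem. DESTRUCTIVE.

    mnt=/tmp/nvme_mount      # placeholder mount point
    part=/dev/nvme0n1p1      # placeholder partition
    disk=/dev/nvme0n1        # placeholder whole disk

    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    [[ -b $part ]] && wipefs --all "$part"   # clears the ext4 signature
    [[ -b $disk ]] && wipefs --all "$disk"   # clears primary/backup GPT and the PMBR
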
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.914 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.915 00:15:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.120 00:15:37 
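The block of [[ 0000:80:01.x == \0\0\0\0\:\6\5\:\0\0\.\0 ]] comparisons above is the verify step: setup.sh config runs with PCI_ALLOWED restricted to 0000:65:00.0, its per-device status lines are read back, and found=1 is set once the line for that address reports the expected active mount (here "Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev"). A stripped-down version of the loop; the status wording comes from the trace, but the field layout ahead of it and the sample lines are assumptions:

    #!/usr/bin/env bash
    # Sketch: scan "setup.sh config"-style output and flag the device under test
    # as found when its status line mentions the expected active mount.

    target_pci=0000:65:00.0
    expected=nvme0n1:nvme0n1
    found=0

    while read -r pci _ _ status; do
        [[ $pci == "$target_pci" ]] || continue
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(printf '%s\n' \
        '0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci' \
        '0000:65:00.0 (8086 0a54): Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev')

    (( found == 1 )) && echo "device $target_pci is holding the test mount"
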
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.120 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.421 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.422 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.683 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.683 00:04:27.683 real 0m13.273s 00:04:27.683 user 0m3.963s 00:04:27.683 sys 0m7.080s 00:04:27.683 00:15:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.683 00:15:41 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.683 ************************************ 00:04:27.683 END TEST nvme_mount 00:04:27.683 ************************************ 00:04:27.683 00:15:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:27.683 00:15:41 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.683 00:15:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.683 00:15:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.683 00:15:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.683 ************************************ 00:04:27.683 START TEST dm_mount 00:04:27.683 ************************************ 00:04:27.683 00:15:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:27.683 00:15:41 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.683 00:15:41 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.683 00:15:41 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.683 00:15:41 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.684 00:15:41 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:28.732 Creating new GPT entries in memory. 00:04:28.732 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.732 other utilities. 00:04:28.732 00:15:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.732 00:15:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.732 00:15:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:28.732 00:15:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.732 00:15:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.673 Creating new GPT entries in memory. 00:04:29.673 The operation has completed successfully. 00:04:29.673 00:15:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.673 00:15:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.673 00:15:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.673 00:15:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.673 00:15:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:31.056 The operation has completed successfully. 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 844705 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.056 00:15:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
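dm_mount, above, repeats the exercise through device-mapper: the disk gets two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351), dmsetup builds an nvme_dm_test device on top of them (resolving to /dev/dm-1 here), and that mapper device receives the ext4 filesystem and the mount. The trace does not show the table fed to dmsetup, so the linear concatenation below is only an assumed illustration of the step, with placeholder paths:

    #!/usr/bin/env bash
    # Sketch: build a device-mapper device over two partitions, then format and mount it.
    # The linear table is an assumed example; the table devices.sh really uses is not
    # visible in the trace above. DESTRUCTIVE.
    set -euo pipefail

    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    name=nvme_dm_test
    mnt=/tmp/dm_mount        # placeholder mount point

    s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")

    # Concatenate the two partitions into one mapper device.
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
        "$s1" "$p1" "$s1" "$s2" "$p2" | dmsetup create "$name"

    dm=$(readlink -f "/dev/mapper/$name")   # e.g. /dev/dm-1, as seen in the trace
    mkfs.ext4 -qF "/dev/mapper/$name"
    mkdir -p "$mnt"
    mount "/dev/mapper/$name" "$mnt"
    echo "mounted $name ($dm) on $mnt"
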
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.356 00:15:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:34.617 00:15:48 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.617 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.824 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:38.825 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:38.825 00:04:38.825 real 0m10.744s 00:04:38.825 user 0m2.870s 00:04:38.825 sys 0m4.940s 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.825 00:15:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 00:04:38.825 END TEST dm_mount 00:04:38.825 ************************************ 00:04:38.825 00:15:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:38.825 00:15:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:38.825 00:15:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:38.825 00:15:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.825 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.825 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.825 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.825 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.825 00:15:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:38.825 00:04:38.825 real 0m28.789s 00:04:38.825 user 0m8.511s 00:04:38.825 sys 0m14.997s 00:04:38.825 00:15:52 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.825 00:15:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 00:04:38.825 END TEST devices 00:04:38.825 ************************************ 00:04:38.825 00:15:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:38.825 00:04:38.825 real 1m39.630s 00:04:38.825 user 0m33.916s 00:04:38.825 sys 0m57.295s 00:04:38.825 00:15:52 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.825 00:15:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 00:04:38.825 END TEST setup.sh 00:04:38.825 ************************************ 00:04:38.825 00:15:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.825 00:15:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.120 Hugepages 00:04:42.120 node hugesize free / total 00:04:42.120 node0 1048576kB 0 / 0 00:04:42.120 node0 2048kB 2048 / 2048 00:04:42.120 node1 1048576kB 0 / 0 00:04:42.120 node1 2048kB 0 / 0 00:04:42.120 00:04:42.120 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.120 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:42.120 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:42.120 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:42.120 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:42.120 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:42.120 00:15:55 -- spdk/autotest.sh@130 -- # uname -s 00:04:42.120 00:15:55 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:42.120 00:15:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:42.120 00:15:55 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.325 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:46.325 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:47.709 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:47.970 00:16:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:48.913 00:16:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:48.913 00:16:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:48.913 00:16:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.913 00:16:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:48.913 00:16:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.913 00:16:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.913 00:16:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.913 00:16:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.913 00:16:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.913 00:16:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:48.913 00:16:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:48.913 00:16:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.117 Waiting for block devices as requested 00:04:53.117 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:53.117 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:04:53.377 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:53.377 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:53.377 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:53.637 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:53.637 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:53.637 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:53.637 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:53.898 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:53.898 00:16:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:53.898 00:16:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:53.898 00:16:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:53.898 00:16:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:53.898 00:16:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:53.898 00:16:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:53.898 00:16:07 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:53.898 00:16:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:53.898 00:16:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:53.898 00:16:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:53.898 00:16:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:53.898 00:16:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:53.898 00:16:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:53.898 00:16:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:53.898 00:16:07 -- common/autotest_common.sh@1557 -- # continue 00:04:53.898 00:16:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:53.898 00:16:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.898 00:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.898 00:16:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:53.898 00:16:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.898 00:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.898 00:16:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.103 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:04:58.104 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.104 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:58.104 00:16:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.104 00:16:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.104 00:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:58.104 00:16:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.104 00:16:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:58.104 00:16:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.104 00:16:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:58.104 00:16:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:58.104 00:16:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:58.104 00:16:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.104 00:16:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.104 00:16:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.104 00:16:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.104 00:16:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.104 00:16:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.104 00:16:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:58.104 00:16:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:58.104 00:16:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:58.104 00:16:11 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:58.104 00:16:11 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:58.104 00:16:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:58.104 00:16:11 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:58.104 00:16:11 -- common/autotest_common.sh@1593 -- # return 0 00:04:58.104 00:16:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:58.104 00:16:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:58.104 00:16:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.104 00:16:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.104 00:16:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:58.104 00:16:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.104 00:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:58.104 00:16:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:58.104 00:16:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.104 00:16:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.104 00:16:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.104 00:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:58.104 ************************************ 00:04:58.104 START TEST env 00:04:58.104 ************************************ 00:04:58.104 00:16:11 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.104 * Looking for test storage... 
00:04:58.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:58.104 00:16:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.104 00:16:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.104 00:16:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.104 00:16:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.104 ************************************ 00:04:58.104 START TEST env_memory 00:04:58.104 ************************************ 00:04:58.104 00:16:11 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.104 00:04:58.104 00:04:58.104 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.104 http://cunit.sourceforge.net/ 00:04:58.104 00:04:58.104 00:04:58.104 Suite: memory 00:04:58.104 Test: alloc and free memory map ...[2024-07-16 00:16:11.694640] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.104 passed 00:04:58.104 Test: mem map translation ...[2024-07-16 00:16:11.720297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.104 [2024-07-16 00:16:11.720331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.104 [2024-07-16 00:16:11.720380] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.104 [2024-07-16 00:16:11.720388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.365 passed 00:04:58.365 Test: mem map registration ...[2024-07-16 00:16:11.775785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:58.365 [2024-07-16 00:16:11.775811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:58.365 passed 00:04:58.365 Test: mem map adjacent registrations ...passed 00:04:58.365 00:04:58.365 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.365 suites 1 1 n/a 0 0 00:04:58.365 tests 4 4 4 0 0 00:04:58.365 asserts 152 152 152 0 n/a 00:04:58.365 00:04:58.365 Elapsed time = 0.195 seconds 00:04:58.365 00:04:58.365 real 0m0.210s 00:04:58.365 user 0m0.199s 00:04:58.365 sys 0m0.010s 00:04:58.365 00:16:11 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.365 00:16:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.365 ************************************ 00:04:58.365 END TEST env_memory 00:04:58.365 ************************************ 00:04:58.365 00:16:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.365 00:16:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.365 00:16:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:04:58.365 00:16:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.365 00:16:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.365 ************************************ 00:04:58.365 START TEST env_vtophys 00:04:58.365 ************************************ 00:04:58.365 00:16:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.365 EAL: lib.eal log level changed from notice to debug 00:04:58.366 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.366 EAL: Detected lcore 1 as core 1 on socket 0 00:04:58.366 EAL: Detected lcore 2 as core 2 on socket 0 00:04:58.366 EAL: Detected lcore 3 as core 3 on socket 0 00:04:58.366 EAL: Detected lcore 4 as core 4 on socket 0 00:04:58.366 EAL: Detected lcore 5 as core 5 on socket 0 00:04:58.366 EAL: Detected lcore 6 as core 6 on socket 0 00:04:58.366 EAL: Detected lcore 7 as core 7 on socket 0 00:04:58.366 EAL: Detected lcore 8 as core 8 on socket 0 00:04:58.366 EAL: Detected lcore 9 as core 9 on socket 0 00:04:58.366 EAL: Detected lcore 10 as core 10 on socket 0 00:04:58.366 EAL: Detected lcore 11 as core 11 on socket 0 00:04:58.366 EAL: Detected lcore 12 as core 12 on socket 0 00:04:58.366 EAL: Detected lcore 13 as core 13 on socket 0 00:04:58.366 EAL: Detected lcore 14 as core 14 on socket 0 00:04:58.366 EAL: Detected lcore 15 as core 15 on socket 0 00:04:58.366 EAL: Detected lcore 16 as core 16 on socket 0 00:04:58.366 EAL: Detected lcore 17 as core 17 on socket 0 00:04:58.366 EAL: Detected lcore 18 as core 18 on socket 0 00:04:58.366 EAL: Detected lcore 19 as core 19 on socket 0 00:04:58.366 EAL: Detected lcore 20 as core 20 on socket 0 00:04:58.366 EAL: Detected lcore 21 as core 21 on socket 0 00:04:58.366 EAL: Detected lcore 22 as core 22 on socket 0 00:04:58.366 EAL: Detected lcore 23 as core 23 on socket 0 00:04:58.366 EAL: Detected lcore 24 as core 24 on socket 0 00:04:58.366 EAL: Detected lcore 25 as core 25 on socket 0 00:04:58.366 EAL: Detected lcore 26 as core 26 on socket 0 00:04:58.366 EAL: Detected lcore 27 as core 27 on socket 0 00:04:58.366 EAL: Detected lcore 28 as core 28 on socket 0 00:04:58.366 EAL: Detected lcore 29 as core 29 on socket 0 00:04:58.366 EAL: Detected lcore 30 as core 30 on socket 0 00:04:58.366 EAL: Detected lcore 31 as core 31 on socket 0 00:04:58.366 EAL: Detected lcore 32 as core 32 on socket 0 00:04:58.366 EAL: Detected lcore 33 as core 33 on socket 0 00:04:58.366 EAL: Detected lcore 34 as core 34 on socket 0 00:04:58.366 EAL: Detected lcore 35 as core 35 on socket 0 00:04:58.366 EAL: Detected lcore 36 as core 0 on socket 1 00:04:58.366 EAL: Detected lcore 37 as core 1 on socket 1 00:04:58.366 EAL: Detected lcore 38 as core 2 on socket 1 00:04:58.366 EAL: Detected lcore 39 as core 3 on socket 1 00:04:58.366 EAL: Detected lcore 40 as core 4 on socket 1 00:04:58.366 EAL: Detected lcore 41 as core 5 on socket 1 00:04:58.366 EAL: Detected lcore 42 as core 6 on socket 1 00:04:58.366 EAL: Detected lcore 43 as core 7 on socket 1 00:04:58.366 EAL: Detected lcore 44 as core 8 on socket 1 00:04:58.366 EAL: Detected lcore 45 as core 9 on socket 1 00:04:58.366 EAL: Detected lcore 46 as core 10 on socket 1 00:04:58.366 EAL: Detected lcore 47 as core 11 on socket 1 00:04:58.366 EAL: Detected lcore 48 as core 12 on socket 1 00:04:58.366 EAL: Detected lcore 49 as core 13 on socket 1 00:04:58.366 EAL: Detected lcore 50 as core 14 on socket 1 00:04:58.366 EAL: Detected lcore 51 as core 15 on socket 1 00:04:58.366 
EAL: Detected lcore 52 as core 16 on socket 1 00:04:58.366 EAL: Detected lcore 53 as core 17 on socket 1 00:04:58.366 EAL: Detected lcore 54 as core 18 on socket 1 00:04:58.366 EAL: Detected lcore 55 as core 19 on socket 1 00:04:58.366 EAL: Detected lcore 56 as core 20 on socket 1 00:04:58.366 EAL: Detected lcore 57 as core 21 on socket 1 00:04:58.366 EAL: Detected lcore 58 as core 22 on socket 1 00:04:58.366 EAL: Detected lcore 59 as core 23 on socket 1 00:04:58.366 EAL: Detected lcore 60 as core 24 on socket 1 00:04:58.366 EAL: Detected lcore 61 as core 25 on socket 1 00:04:58.366 EAL: Detected lcore 62 as core 26 on socket 1 00:04:58.366 EAL: Detected lcore 63 as core 27 on socket 1 00:04:58.366 EAL: Detected lcore 64 as core 28 on socket 1 00:04:58.366 EAL: Detected lcore 65 as core 29 on socket 1 00:04:58.366 EAL: Detected lcore 66 as core 30 on socket 1 00:04:58.366 EAL: Detected lcore 67 as core 31 on socket 1 00:04:58.366 EAL: Detected lcore 68 as core 32 on socket 1 00:04:58.366 EAL: Detected lcore 69 as core 33 on socket 1 00:04:58.366 EAL: Detected lcore 70 as core 34 on socket 1 00:04:58.366 EAL: Detected lcore 71 as core 35 on socket 1 00:04:58.366 EAL: Detected lcore 72 as core 0 on socket 0 00:04:58.366 EAL: Detected lcore 73 as core 1 on socket 0 00:04:58.366 EAL: Detected lcore 74 as core 2 on socket 0 00:04:58.366 EAL: Detected lcore 75 as core 3 on socket 0 00:04:58.366 EAL: Detected lcore 76 as core 4 on socket 0 00:04:58.366 EAL: Detected lcore 77 as core 5 on socket 0 00:04:58.366 EAL: Detected lcore 78 as core 6 on socket 0 00:04:58.366 EAL: Detected lcore 79 as core 7 on socket 0 00:04:58.366 EAL: Detected lcore 80 as core 8 on socket 0 00:04:58.366 EAL: Detected lcore 81 as core 9 on socket 0 00:04:58.366 EAL: Detected lcore 82 as core 10 on socket 0 00:04:58.366 EAL: Detected lcore 83 as core 11 on socket 0 00:04:58.366 EAL: Detected lcore 84 as core 12 on socket 0 00:04:58.366 EAL: Detected lcore 85 as core 13 on socket 0 00:04:58.366 EAL: Detected lcore 86 as core 14 on socket 0 00:04:58.366 EAL: Detected lcore 87 as core 15 on socket 0 00:04:58.366 EAL: Detected lcore 88 as core 16 on socket 0 00:04:58.366 EAL: Detected lcore 89 as core 17 on socket 0 00:04:58.366 EAL: Detected lcore 90 as core 18 on socket 0 00:04:58.366 EAL: Detected lcore 91 as core 19 on socket 0 00:04:58.366 EAL: Detected lcore 92 as core 20 on socket 0 00:04:58.366 EAL: Detected lcore 93 as core 21 on socket 0 00:04:58.366 EAL: Detected lcore 94 as core 22 on socket 0 00:04:58.366 EAL: Detected lcore 95 as core 23 on socket 0 00:04:58.366 EAL: Detected lcore 96 as core 24 on socket 0 00:04:58.366 EAL: Detected lcore 97 as core 25 on socket 0 00:04:58.366 EAL: Detected lcore 98 as core 26 on socket 0 00:04:58.366 EAL: Detected lcore 99 as core 27 on socket 0 00:04:58.366 EAL: Detected lcore 100 as core 28 on socket 0 00:04:58.366 EAL: Detected lcore 101 as core 29 on socket 0 00:04:58.366 EAL: Detected lcore 102 as core 30 on socket 0 00:04:58.366 EAL: Detected lcore 103 as core 31 on socket 0 00:04:58.366 EAL: Detected lcore 104 as core 32 on socket 0 00:04:58.366 EAL: Detected lcore 105 as core 33 on socket 0 00:04:58.366 EAL: Detected lcore 106 as core 34 on socket 0 00:04:58.366 EAL: Detected lcore 107 as core 35 on socket 0 00:04:58.366 EAL: Detected lcore 108 as core 0 on socket 1 00:04:58.366 EAL: Detected lcore 109 as core 1 on socket 1 00:04:58.366 EAL: Detected lcore 110 as core 2 on socket 1 00:04:58.366 EAL: Detected lcore 111 as core 3 on socket 1 00:04:58.366 EAL: Detected 
lcore 112 as core 4 on socket 1 00:04:58.366 EAL: Detected lcore 113 as core 5 on socket 1 00:04:58.366 EAL: Detected lcore 114 as core 6 on socket 1 00:04:58.366 EAL: Detected lcore 115 as core 7 on socket 1 00:04:58.366 EAL: Detected lcore 116 as core 8 on socket 1 00:04:58.366 EAL: Detected lcore 117 as core 9 on socket 1 00:04:58.366 EAL: Detected lcore 118 as core 10 on socket 1 00:04:58.366 EAL: Detected lcore 119 as core 11 on socket 1 00:04:58.366 EAL: Detected lcore 120 as core 12 on socket 1 00:04:58.366 EAL: Detected lcore 121 as core 13 on socket 1 00:04:58.366 EAL: Detected lcore 122 as core 14 on socket 1 00:04:58.366 EAL: Detected lcore 123 as core 15 on socket 1 00:04:58.366 EAL: Detected lcore 124 as core 16 on socket 1 00:04:58.366 EAL: Detected lcore 125 as core 17 on socket 1 00:04:58.366 EAL: Detected lcore 126 as core 18 on socket 1 00:04:58.366 EAL: Detected lcore 127 as core 19 on socket 1 00:04:58.366 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:58.366 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:58.366 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:58.366 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:58.366 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:58.366 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:58.366 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:58.366 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:58.366 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:58.366 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:58.366 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:58.366 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:58.366 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:58.366 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:58.366 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:58.366 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:58.366 EAL: Maximum logical cores by configuration: 128 00:04:58.366 EAL: Detected CPU lcores: 128 00:04:58.366 EAL: Detected NUMA nodes: 2 00:04:58.366 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:58.366 EAL: Detected shared linkage of DPDK 00:04:58.366 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.366 EAL: Bus pci wants IOVA as 'DC' 00:04:58.366 EAL: Buses did not request a specific IOVA mode. 00:04:58.366 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:58.366 EAL: Selected IOVA mode 'VA' 00:04:58.366 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.366 EAL: Probing VFIO support... 00:04:58.366 EAL: IOMMU type 1 (Type 1) is supported 00:04:58.366 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:58.366 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:58.366 EAL: VFIO support initialized 00:04:58.366 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.366 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.366 EAL: Setting up physically contiguous memory... 
00:04:58.366 EAL: Setting maximum number of open files to 524288 00:04:58.366 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.366 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:58.366 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.366 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.366 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.366 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.366 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.366 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.366 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.366 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.366 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.366 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.366 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.366 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.366 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.366 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.366 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.366 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.366 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.366 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.366 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.366 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.366 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.366 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:58.366 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.366 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:58.366 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.366 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.366 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:58.367 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:58.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.367 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:58.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.367 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:58.367 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:58.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.367 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:58.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.367 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:58.367 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:58.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.367 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:58.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.367 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:58.367 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:58.367 EAL: Hugepages will be freed exactly as allocated. 00:04:58.367 EAL: No shared files mode enabled, IPC is disabled 00:04:58.367 EAL: No shared files mode enabled, IPC is disabled 00:04:58.367 EAL: TSC frequency is ~2400000 KHz 00:04:58.367 EAL: Main lcore 0 is ready (tid=7f802d991a00;cpuset=[0]) 00:04:58.367 EAL: Trying to obtain current memory policy. 00:04:58.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.367 EAL: Restoring previous memory policy: 0 00:04:58.367 EAL: request: mp_malloc_sync 00:04:58.367 EAL: No shared files mode enabled, IPC is disabled 00:04:58.367 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.367 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.628 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.628 00:04:58.628 00:04:58.628 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.628 http://cunit.sourceforge.net/ 00:04:58.628 00:04:58.628 00:04:58.628 Suite: components_suite 00:04:58.628 Test: vtophys_malloc_test ...passed 00:04:58.628 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:58.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.628 EAL: Restoring previous memory policy: 4 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was shrunk by 4MB 00:04:58.628 EAL: Trying to obtain current memory policy. 00:04:58.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.628 EAL: Restoring previous memory policy: 4 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was expanded by 6MB 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was shrunk by 6MB 00:04:58.628 EAL: Trying to obtain current memory policy. 00:04:58.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.628 EAL: Restoring previous memory policy: 4 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was expanded by 10MB 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was shrunk by 10MB 00:04:58.628 EAL: Trying to obtain current memory policy. 
00:04:58.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.628 EAL: Restoring previous memory policy: 4 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was expanded by 18MB 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.628 EAL: No shared files mode enabled, IPC is disabled 00:04:58.628 EAL: Heap on socket 0 was shrunk by 18MB 00:04:58.628 EAL: Trying to obtain current memory policy. 00:04:58.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.628 EAL: Restoring previous memory policy: 4 00:04:58.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.628 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was expanded by 34MB 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was shrunk by 34MB 00:04:58.629 EAL: Trying to obtain current memory policy. 00:04:58.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.629 EAL: Restoring previous memory policy: 4 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was expanded by 66MB 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was shrunk by 66MB 00:04:58.629 EAL: Trying to obtain current memory policy. 00:04:58.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.629 EAL: Restoring previous memory policy: 4 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was expanded by 130MB 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was shrunk by 130MB 00:04:58.629 EAL: Trying to obtain current memory policy. 00:04:58.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.629 EAL: Restoring previous memory policy: 4 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was expanded by 258MB 00:04:58.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.629 EAL: request: mp_malloc_sync 00:04:58.629 EAL: No shared files mode enabled, IPC is disabled 00:04:58.629 EAL: Heap on socket 0 was shrunk by 258MB 00:04:58.629 EAL: Trying to obtain current memory policy. 
00:04:58.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.890 EAL: Restoring previous memory policy: 4 00:04:58.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.890 EAL: request: mp_malloc_sync 00:04:58.890 EAL: No shared files mode enabled, IPC is disabled 00:04:58.890 EAL: Heap on socket 0 was expanded by 514MB 00:04:58.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.890 EAL: request: mp_malloc_sync 00:04:58.890 EAL: No shared files mode enabled, IPC is disabled 00:04:58.890 EAL: Heap on socket 0 was shrunk by 514MB 00:04:58.890 EAL: Trying to obtain current memory policy. 00:04:58.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.150 EAL: Restoring previous memory policy: 4 00:04:59.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.150 EAL: request: mp_malloc_sync 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.150 EAL: request: mp_malloc_sync 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.150 passed 00:04:59.150 00:04:59.150 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.150 suites 1 1 n/a 0 0 00:04:59.150 tests 2 2 2 0 0 00:04:59.150 asserts 497 497 497 0 n/a 00:04:59.150 00:04:59.150 Elapsed time = 0.659 seconds 00:04:59.150 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.150 EAL: request: mp_malloc_sync 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 EAL: No shared files mode enabled, IPC is disabled 00:04:59.150 00:04:59.150 real 0m0.802s 00:04:59.150 user 0m0.410s 00:04:59.150 sys 0m0.353s 00:04:59.150 00:16:12 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.150 00:16:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:59.150 ************************************ 00:04:59.150 END TEST env_vtophys 00:04:59.150 ************************************ 00:04:59.150 00:16:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.150 00:16:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.150 00:16:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.150 00:16:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.150 00:16:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.411 ************************************ 00:04:59.411 START TEST env_pci 00:04:59.411 ************************************ 00:04:59.411 00:16:12 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.411 00:04:59.411 00:04:59.411 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.411 http://cunit.sourceforge.net/ 00:04:59.411 00:04:59.411 00:04:59.411 Suite: pci 00:04:59.411 Test: pci_hook ...[2024-07-16 00:16:12.795155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 856665 has claimed it 00:04:59.411 EAL: Cannot find device (10000:00:01.0) 00:04:59.411 EAL: Failed to attach device on primary process 00:04:59.411 passed 00:04:59.411 
00:04:59.411 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.411 suites 1 1 n/a 0 0 00:04:59.411 tests 1 1 1 0 0 00:04:59.411 asserts 25 25 25 0 n/a 00:04:59.411 00:04:59.411 Elapsed time = 0.023 seconds 00:04:59.411 00:04:59.411 real 0m0.034s 00:04:59.411 user 0m0.009s 00:04:59.411 sys 0m0.024s 00:04:59.411 00:16:12 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.412 00:16:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:59.412 ************************************ 00:04:59.412 END TEST env_pci 00:04:59.412 ************************************ 00:04:59.412 00:16:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.412 00:16:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:59.412 00:16:12 env -- env/env.sh@15 -- # uname 00:04:59.412 00:16:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:59.412 00:16:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:59.412 00:16:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.412 00:16:12 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:59.412 00:16:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.412 00:16:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.412 ************************************ 00:04:59.412 START TEST env_dpdk_post_init 00:04:59.412 ************************************ 00:04:59.412 00:16:12 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.412 EAL: Detected CPU lcores: 128 00:04:59.412 EAL: Detected NUMA nodes: 2 00:04:59.412 EAL: Detected shared linkage of DPDK 00:04:59.412 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.412 EAL: Selected IOVA mode 'VA' 00:04:59.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.412 EAL: VFIO support initialized 00:04:59.412 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.412 EAL: Using IOMMU type 1 (Type 1) 00:04:59.672 EAL: Ignore mapping IO port bar(1) 00:04:59.672 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:59.933 EAL: Ignore mapping IO port bar(1) 00:04:59.933 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:00.194 EAL: Ignore mapping IO port bar(1) 00:05:00.194 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:00.194 EAL: Ignore mapping IO port bar(1) 00:05:00.454 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:00.454 EAL: Ignore mapping IO port bar(1) 00:05:00.713 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:00.713 EAL: Ignore mapping IO port bar(1) 00:05:00.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:00.972 EAL: Ignore mapping IO port bar(1) 00:05:00.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:01.231 EAL: Ignore mapping IO port bar(1) 00:05:01.231 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:01.491 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:01.750 EAL: Ignore mapping IO port bar(1) 00:05:01.750 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:02.010 EAL: Ignore mapping IO port bar(1) 00:05:02.010 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:02.010 EAL: Ignore mapping IO port bar(1) 00:05:02.269 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:02.269 EAL: Ignore mapping IO port bar(1) 00:05:02.528 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:02.528 EAL: Ignore mapping IO port bar(1) 00:05:02.787 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:02.787 EAL: Ignore mapping IO port bar(1) 00:05:02.787 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:03.048 EAL: Ignore mapping IO port bar(1) 00:05:03.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:03.307 EAL: Ignore mapping IO port bar(1) 00:05:03.307 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:03.307 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:03.307 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:03.588 Starting DPDK initialization... 00:05:03.588 Starting SPDK post initialization... 00:05:03.588 SPDK NVMe probe 00:05:03.588 Attaching to 0000:65:00.0 00:05:03.588 Attached to 0000:65:00.0 00:05:03.588 Cleaning up... 00:05:05.500 00:05:05.500 real 0m5.731s 00:05:05.500 user 0m0.192s 00:05:05.500 sys 0m0.080s 00:05:05.500 00:16:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.500 00:16:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.500 ************************************ 00:05:05.500 END TEST env_dpdk_post_init 00:05:05.500 ************************************ 00:05:05.500 00:16:18 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.500 00:16:18 env -- env/env.sh@26 -- # uname 00:05:05.500 00:16:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.500 00:16:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.500 00:16:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.500 00:16:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.500 00:16:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.500 ************************************ 00:05:05.500 START TEST env_mem_callbacks 00:05:05.500 ************************************ 00:05:05.500 00:16:18 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.500 EAL: Detected CPU lcores: 128 00:05:05.500 EAL: Detected NUMA nodes: 2 00:05:05.500 EAL: Detected shared linkage of DPDK 00:05:05.500 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.500 EAL: Selected IOVA mode 'VA' 00:05:05.500 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.500 EAL: VFIO support initialized 00:05:05.500 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.500 00:05:05.500 00:05:05.500 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.500 http://cunit.sourceforge.net/ 00:05:05.500 00:05:05.500 00:05:05.500 Suite: memory 00:05:05.500 Test: test ... 
00:05:05.500 register 0x200000200000 2097152 00:05:05.500 malloc 3145728 00:05:05.500 register 0x200000400000 4194304 00:05:05.500 buf 0x200000500000 len 3145728 PASSED 00:05:05.500 malloc 64 00:05:05.500 buf 0x2000004fff40 len 64 PASSED 00:05:05.500 malloc 4194304 00:05:05.500 register 0x200000800000 6291456 00:05:05.500 buf 0x200000a00000 len 4194304 PASSED 00:05:05.500 free 0x200000500000 3145728 00:05:05.500 free 0x2000004fff40 64 00:05:05.500 unregister 0x200000400000 4194304 PASSED 00:05:05.500 free 0x200000a00000 4194304 00:05:05.500 unregister 0x200000800000 6291456 PASSED 00:05:05.500 malloc 8388608 00:05:05.500 register 0x200000400000 10485760 00:05:05.500 buf 0x200000600000 len 8388608 PASSED 00:05:05.500 free 0x200000600000 8388608 00:05:05.501 unregister 0x200000400000 10485760 PASSED 00:05:05.501 passed 00:05:05.501 00:05:05.501 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.501 suites 1 1 n/a 0 0 00:05:05.501 tests 1 1 1 0 0 00:05:05.501 asserts 15 15 15 0 n/a 00:05:05.501 00:05:05.501 Elapsed time = 0.007 seconds 00:05:05.501 00:05:05.501 real 0m0.065s 00:05:05.501 user 0m0.020s 00:05:05.501 sys 0m0.045s 00:05:05.501 00:16:18 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.501 00:16:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 END TEST env_mem_callbacks 00:05:05.501 ************************************ 00:05:05.501 00:16:18 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.501 00:05:05.501 real 0m7.315s 00:05:05.501 user 0m1.020s 00:05:05.501 sys 0m0.825s 00:05:05.501 00:16:18 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.501 00:16:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 END TEST env 00:05:05.501 ************************************ 00:05:05.501 00:16:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.501 00:16:18 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.501 00:16:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.501 00:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.501 00:16:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 START TEST rpc 00:05:05.501 ************************************ 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.501 * Looking for test storage... 00:05:05.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.501 00:16:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=858091 00:05:05.501 00:16:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.501 00:16:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 858091 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@829 -- # '[' -z 858091 ']' 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.501 00:16:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 00:16:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:05.501 [2024-07-16 00:16:19.039321] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:05.501 [2024-07-16 00:16:19.039377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858091 ] 00:05:05.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.501 [2024-07-16 00:16:19.107023] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.761 [2024-07-16 00:16:19.176138] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.761 [2024-07-16 00:16:19.176177] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 858091' to capture a snapshot of events at runtime. 00:05:05.761 [2024-07-16 00:16:19.176184] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.761 [2024-07-16 00:16:19.176191] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.761 [2024-07-16 00:16:19.176196] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid858091 for offline analysis/debug. 00:05:05.761 [2024-07-16 00:16:19.176215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.332 00:16:19 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.332 00:16:19 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:06.332 00:16:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:06.332 00:16:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:06.332 00:16:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:06.332 00:16:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:06.332 00:16:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.332 00:16:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.332 00:16:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.332 ************************************ 00:05:06.332 START TEST rpc_integrity 00:05:06.332 ************************************ 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.332 00:16:19 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.332 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.332 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.332 { 00:05:06.332 "name": "Malloc0", 00:05:06.332 "aliases": [ 00:05:06.332 "d071cc8d-78c9-4561-969b-bbe50201cfed" 00:05:06.332 ], 00:05:06.332 "product_name": "Malloc disk", 00:05:06.332 "block_size": 512, 00:05:06.332 "num_blocks": 16384, 00:05:06.332 "uuid": "d071cc8d-78c9-4561-969b-bbe50201cfed", 00:05:06.332 "assigned_rate_limits": { 00:05:06.332 "rw_ios_per_sec": 0, 00:05:06.332 "rw_mbytes_per_sec": 0, 00:05:06.332 "r_mbytes_per_sec": 0, 00:05:06.332 "w_mbytes_per_sec": 0 00:05:06.332 }, 00:05:06.332 "claimed": false, 00:05:06.332 "zoned": false, 00:05:06.332 "supported_io_types": { 00:05:06.332 "read": true, 00:05:06.332 "write": true, 00:05:06.332 "unmap": true, 00:05:06.332 "flush": true, 00:05:06.332 "reset": true, 00:05:06.332 "nvme_admin": false, 00:05:06.332 "nvme_io": false, 00:05:06.332 "nvme_io_md": false, 00:05:06.332 "write_zeroes": true, 00:05:06.332 "zcopy": true, 00:05:06.332 "get_zone_info": false, 00:05:06.332 "zone_management": false, 00:05:06.332 "zone_append": false, 00:05:06.332 "compare": false, 00:05:06.332 "compare_and_write": false, 00:05:06.332 "abort": true, 00:05:06.332 "seek_hole": false, 00:05:06.332 "seek_data": false, 00:05:06.332 "copy": true, 00:05:06.332 "nvme_iov_md": false 00:05:06.332 }, 00:05:06.332 "memory_domains": [ 00:05:06.332 { 00:05:06.332 "dma_device_id": "system", 00:05:06.332 "dma_device_type": 1 00:05:06.332 }, 00:05:06.332 { 00:05:06.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.332 "dma_device_type": 2 00:05:06.332 } 00:05:06.332 ], 00:05:06.333 "driver_specific": {} 00:05:06.333 } 00:05:06.333 ]' 00:05:06.333 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.333 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.333 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:06.333 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.333 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.333 [2024-07-16 00:16:19.933361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:06.333 [2024-07-16 00:16:19.933392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.333 [2024-07-16 00:16:19.933405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x217ac50 00:05:06.333 [2024-07-16 00:16:19.933411] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:06.333 [2024-07-16 00:16:19.934719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.333 [2024-07-16 00:16:19.934740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.333 Passthru0 00:05:06.333 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.333 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.333 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.333 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.594 00:16:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.594 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.594 { 00:05:06.594 "name": "Malloc0", 00:05:06.594 "aliases": [ 00:05:06.594 "d071cc8d-78c9-4561-969b-bbe50201cfed" 00:05:06.594 ], 00:05:06.594 "product_name": "Malloc disk", 00:05:06.594 "block_size": 512, 00:05:06.594 "num_blocks": 16384, 00:05:06.594 "uuid": "d071cc8d-78c9-4561-969b-bbe50201cfed", 00:05:06.594 "assigned_rate_limits": { 00:05:06.594 "rw_ios_per_sec": 0, 00:05:06.594 "rw_mbytes_per_sec": 0, 00:05:06.594 "r_mbytes_per_sec": 0, 00:05:06.594 "w_mbytes_per_sec": 0 00:05:06.595 }, 00:05:06.595 "claimed": true, 00:05:06.595 "claim_type": "exclusive_write", 00:05:06.595 "zoned": false, 00:05:06.595 "supported_io_types": { 00:05:06.595 "read": true, 00:05:06.595 "write": true, 00:05:06.595 "unmap": true, 00:05:06.595 "flush": true, 00:05:06.595 "reset": true, 00:05:06.595 "nvme_admin": false, 00:05:06.595 "nvme_io": false, 00:05:06.595 "nvme_io_md": false, 00:05:06.595 "write_zeroes": true, 00:05:06.595 "zcopy": true, 00:05:06.595 "get_zone_info": false, 00:05:06.595 "zone_management": false, 00:05:06.595 "zone_append": false, 00:05:06.595 "compare": false, 00:05:06.595 "compare_and_write": false, 00:05:06.595 "abort": true, 00:05:06.595 "seek_hole": false, 00:05:06.595 "seek_data": false, 00:05:06.595 "copy": true, 00:05:06.595 "nvme_iov_md": false 00:05:06.595 }, 00:05:06.595 "memory_domains": [ 00:05:06.595 { 00:05:06.595 "dma_device_id": "system", 00:05:06.595 "dma_device_type": 1 00:05:06.595 }, 00:05:06.595 { 00:05:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.595 "dma_device_type": 2 00:05:06.595 } 00:05:06.595 ], 00:05:06.595 "driver_specific": {} 00:05:06.595 }, 00:05:06.595 { 00:05:06.595 "name": "Passthru0", 00:05:06.595 "aliases": [ 00:05:06.595 "5fdc6eba-6d9a-5682-8be2-847c841525e8" 00:05:06.595 ], 00:05:06.595 "product_name": "passthru", 00:05:06.595 "block_size": 512, 00:05:06.595 "num_blocks": 16384, 00:05:06.595 "uuid": "5fdc6eba-6d9a-5682-8be2-847c841525e8", 00:05:06.595 "assigned_rate_limits": { 00:05:06.595 "rw_ios_per_sec": 0, 00:05:06.595 "rw_mbytes_per_sec": 0, 00:05:06.595 "r_mbytes_per_sec": 0, 00:05:06.595 "w_mbytes_per_sec": 0 00:05:06.595 }, 00:05:06.595 "claimed": false, 00:05:06.595 "zoned": false, 00:05:06.595 "supported_io_types": { 00:05:06.595 "read": true, 00:05:06.595 "write": true, 00:05:06.595 "unmap": true, 00:05:06.595 "flush": true, 00:05:06.595 "reset": true, 00:05:06.595 "nvme_admin": false, 00:05:06.595 "nvme_io": false, 00:05:06.595 "nvme_io_md": false, 00:05:06.595 "write_zeroes": true, 00:05:06.595 "zcopy": true, 00:05:06.595 "get_zone_info": false, 00:05:06.595 "zone_management": false, 00:05:06.595 "zone_append": false, 00:05:06.595 "compare": false, 00:05:06.595 "compare_and_write": false, 00:05:06.595 "abort": true, 00:05:06.595 
"seek_hole": false, 00:05:06.595 "seek_data": false, 00:05:06.595 "copy": true, 00:05:06.595 "nvme_iov_md": false 00:05:06.595 }, 00:05:06.595 "memory_domains": [ 00:05:06.595 { 00:05:06.595 "dma_device_id": "system", 00:05:06.595 "dma_device_type": 1 00:05:06.595 }, 00:05:06.595 { 00:05:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.595 "dma_device_type": 2 00:05:06.595 } 00:05:06.595 ], 00:05:06.595 "driver_specific": { 00:05:06.595 "passthru": { 00:05:06.595 "name": "Passthru0", 00:05:06.595 "base_bdev_name": "Malloc0" 00:05:06.595 } 00:05:06.595 } 00:05:06.595 } 00:05:06.595 ]' 00:05:06.595 00:16:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:06.595 00:16:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.595 00:05:06.595 real 0m0.294s 00:05:06.595 user 0m0.191s 00:05:06.595 sys 0m0.036s 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 ************************************ 00:05:06.595 END TEST rpc_integrity 00:05:06.595 ************************************ 00:05:06.595 00:16:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.595 00:16:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:06.595 00:16:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.595 00:16:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.595 00:16:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 ************************************ 00:05:06.595 START TEST rpc_plugins 00:05:06.595 ************************************ 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:06.595 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.595 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:06.595 00:16:20 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.595 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.595 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:06.595 { 00:05:06.595 "name": "Malloc1", 00:05:06.595 "aliases": [ 00:05:06.595 "fc6e018b-9873-48d5-a535-ab28adab39cc" 00:05:06.595 ], 00:05:06.595 "product_name": "Malloc disk", 00:05:06.595 "block_size": 4096, 00:05:06.595 "num_blocks": 256, 00:05:06.595 "uuid": "fc6e018b-9873-48d5-a535-ab28adab39cc", 00:05:06.595 "assigned_rate_limits": { 00:05:06.595 "rw_ios_per_sec": 0, 00:05:06.595 "rw_mbytes_per_sec": 0, 00:05:06.595 "r_mbytes_per_sec": 0, 00:05:06.595 "w_mbytes_per_sec": 0 00:05:06.595 }, 00:05:06.595 "claimed": false, 00:05:06.595 "zoned": false, 00:05:06.595 "supported_io_types": { 00:05:06.595 "read": true, 00:05:06.595 "write": true, 00:05:06.595 "unmap": true, 00:05:06.595 "flush": true, 00:05:06.595 "reset": true, 00:05:06.595 "nvme_admin": false, 00:05:06.595 "nvme_io": false, 00:05:06.595 "nvme_io_md": false, 00:05:06.595 "write_zeroes": true, 00:05:06.595 "zcopy": true, 00:05:06.595 "get_zone_info": false, 00:05:06.595 "zone_management": false, 00:05:06.595 "zone_append": false, 00:05:06.595 "compare": false, 00:05:06.595 "compare_and_write": false, 00:05:06.595 "abort": true, 00:05:06.595 "seek_hole": false, 00:05:06.595 "seek_data": false, 00:05:06.595 "copy": true, 00:05:06.595 "nvme_iov_md": false 00:05:06.595 }, 00:05:06.595 "memory_domains": [ 00:05:06.595 { 00:05:06.595 "dma_device_id": "system", 00:05:06.595 "dma_device_type": 1 00:05:06.595 }, 00:05:06.595 { 00:05:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.595 "dma_device_type": 2 00:05:06.595 } 00:05:06.595 ], 00:05:06.595 "driver_specific": {} 00:05:06.595 } 00:05:06.595 ]' 00:05:06.595 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:06.856 00:16:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:06.856 00:05:06.856 real 0m0.140s 00:05:06.856 user 0m0.090s 00:05:06.856 sys 0m0.015s 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.856 00:16:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.856 ************************************ 00:05:06.856 END TEST rpc_plugins 00:05:06.856 ************************************ 00:05:06.856 00:16:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.856 00:16:20 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:06.856 00:16:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.856 00:16:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.856 00:16:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.856 ************************************ 00:05:06.856 START TEST rpc_trace_cmd_test 00:05:06.856 ************************************ 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:06.856 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid858091", 00:05:06.856 "tpoint_group_mask": "0x8", 00:05:06.856 "iscsi_conn": { 00:05:06.856 "mask": "0x2", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "scsi": { 00:05:06.856 "mask": "0x4", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "bdev": { 00:05:06.856 "mask": "0x8", 00:05:06.856 "tpoint_mask": "0xffffffffffffffff" 00:05:06.856 }, 00:05:06.856 "nvmf_rdma": { 00:05:06.856 "mask": "0x10", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "nvmf_tcp": { 00:05:06.856 "mask": "0x20", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "ftl": { 00:05:06.856 "mask": "0x40", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "blobfs": { 00:05:06.856 "mask": "0x80", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "dsa": { 00:05:06.856 "mask": "0x200", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "thread": { 00:05:06.856 "mask": "0x400", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "nvme_pcie": { 00:05:06.856 "mask": "0x800", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "iaa": { 00:05:06.856 "mask": "0x1000", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "nvme_tcp": { 00:05:06.856 "mask": "0x2000", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "bdev_nvme": { 00:05:06.856 "mask": "0x4000", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 }, 00:05:06.856 "sock": { 00:05:06.856 "mask": "0x8000", 00:05:06.856 "tpoint_mask": "0x0" 00:05:06.856 } 00:05:06.856 }' 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:06.856 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:06.857 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:06.857 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:06.857 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:07.117 00:05:07.117 real 0m0.230s 00:05:07.117 user 0m0.193s 00:05:07.117 sys 0m0.030s 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.117 00:16:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 ************************************ 00:05:07.117 END TEST rpc_trace_cmd_test 00:05:07.117 ************************************ 00:05:07.117 00:16:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:07.117 00:16:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:07.117 00:16:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:07.117 00:16:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:07.117 00:16:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.117 00:16:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.117 00:16:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 ************************************ 00:05:07.117 START TEST rpc_daemon_integrity 00:05:07.117 ************************************ 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.117 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.118 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.421 { 00:05:07.421 "name": "Malloc2", 00:05:07.421 "aliases": [ 00:05:07.421 "1c7d5a75-144f-4211-b29e-c2b6e4d5a2bb" 00:05:07.421 ], 00:05:07.421 "product_name": "Malloc disk", 00:05:07.421 "block_size": 512, 00:05:07.421 "num_blocks": 16384, 00:05:07.421 "uuid": "1c7d5a75-144f-4211-b29e-c2b6e4d5a2bb", 00:05:07.421 "assigned_rate_limits": { 00:05:07.421 "rw_ios_per_sec": 0, 00:05:07.421 "rw_mbytes_per_sec": 0, 00:05:07.421 "r_mbytes_per_sec": 0, 00:05:07.421 "w_mbytes_per_sec": 0 00:05:07.421 }, 00:05:07.421 "claimed": false, 00:05:07.421 "zoned": false, 00:05:07.421 "supported_io_types": { 00:05:07.421 "read": true, 00:05:07.421 "write": true, 00:05:07.421 "unmap": true, 00:05:07.421 "flush": true, 00:05:07.421 "reset": true, 00:05:07.421 "nvme_admin": false, 
00:05:07.421 "nvme_io": false, 00:05:07.421 "nvme_io_md": false, 00:05:07.421 "write_zeroes": true, 00:05:07.421 "zcopy": true, 00:05:07.421 "get_zone_info": false, 00:05:07.421 "zone_management": false, 00:05:07.421 "zone_append": false, 00:05:07.421 "compare": false, 00:05:07.421 "compare_and_write": false, 00:05:07.421 "abort": true, 00:05:07.421 "seek_hole": false, 00:05:07.421 "seek_data": false, 00:05:07.421 "copy": true, 00:05:07.421 "nvme_iov_md": false 00:05:07.421 }, 00:05:07.421 "memory_domains": [ 00:05:07.421 { 00:05:07.421 "dma_device_id": "system", 00:05:07.421 "dma_device_type": 1 00:05:07.421 }, 00:05:07.421 { 00:05:07.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.421 "dma_device_type": 2 00:05:07.421 } 00:05:07.421 ], 00:05:07.421 "driver_specific": {} 00:05:07.421 } 00:05:07.421 ]' 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 [2024-07-16 00:16:20.803720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:07.421 [2024-07-16 00:16:20.803750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.421 [2024-07-16 00:16:20.803764] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x217d700 00:05:07.421 [2024-07-16 00:16:20.803771] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.421 [2024-07-16 00:16:20.804980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.421 [2024-07-16 00:16:20.805002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.421 Passthru0 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.421 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.421 { 00:05:07.421 "name": "Malloc2", 00:05:07.421 "aliases": [ 00:05:07.421 "1c7d5a75-144f-4211-b29e-c2b6e4d5a2bb" 00:05:07.421 ], 00:05:07.421 "product_name": "Malloc disk", 00:05:07.421 "block_size": 512, 00:05:07.421 "num_blocks": 16384, 00:05:07.421 "uuid": "1c7d5a75-144f-4211-b29e-c2b6e4d5a2bb", 00:05:07.421 "assigned_rate_limits": { 00:05:07.421 "rw_ios_per_sec": 0, 00:05:07.421 "rw_mbytes_per_sec": 0, 00:05:07.421 "r_mbytes_per_sec": 0, 00:05:07.421 "w_mbytes_per_sec": 0 00:05:07.421 }, 00:05:07.421 "claimed": true, 00:05:07.421 "claim_type": "exclusive_write", 00:05:07.421 "zoned": false, 00:05:07.421 "supported_io_types": { 00:05:07.421 "read": true, 00:05:07.421 "write": true, 00:05:07.421 "unmap": true, 00:05:07.421 "flush": true, 00:05:07.421 "reset": true, 00:05:07.421 "nvme_admin": false, 00:05:07.421 "nvme_io": false, 00:05:07.421 "nvme_io_md": false, 00:05:07.421 "write_zeroes": true, 00:05:07.421 "zcopy": true, 
00:05:07.421 "get_zone_info": false, 00:05:07.421 "zone_management": false, 00:05:07.421 "zone_append": false, 00:05:07.421 "compare": false, 00:05:07.421 "compare_and_write": false, 00:05:07.421 "abort": true, 00:05:07.421 "seek_hole": false, 00:05:07.421 "seek_data": false, 00:05:07.421 "copy": true, 00:05:07.421 "nvme_iov_md": false 00:05:07.421 }, 00:05:07.421 "memory_domains": [ 00:05:07.421 { 00:05:07.421 "dma_device_id": "system", 00:05:07.421 "dma_device_type": 1 00:05:07.421 }, 00:05:07.421 { 00:05:07.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.421 "dma_device_type": 2 00:05:07.421 } 00:05:07.421 ], 00:05:07.421 "driver_specific": {} 00:05:07.421 }, 00:05:07.421 { 00:05:07.421 "name": "Passthru0", 00:05:07.421 "aliases": [ 00:05:07.421 "6ca8fc87-213d-5edf-a41d-12663300b2e0" 00:05:07.421 ], 00:05:07.421 "product_name": "passthru", 00:05:07.421 "block_size": 512, 00:05:07.421 "num_blocks": 16384, 00:05:07.421 "uuid": "6ca8fc87-213d-5edf-a41d-12663300b2e0", 00:05:07.421 "assigned_rate_limits": { 00:05:07.422 "rw_ios_per_sec": 0, 00:05:07.422 "rw_mbytes_per_sec": 0, 00:05:07.422 "r_mbytes_per_sec": 0, 00:05:07.422 "w_mbytes_per_sec": 0 00:05:07.422 }, 00:05:07.422 "claimed": false, 00:05:07.422 "zoned": false, 00:05:07.422 "supported_io_types": { 00:05:07.422 "read": true, 00:05:07.422 "write": true, 00:05:07.422 "unmap": true, 00:05:07.422 "flush": true, 00:05:07.422 "reset": true, 00:05:07.422 "nvme_admin": false, 00:05:07.422 "nvme_io": false, 00:05:07.422 "nvme_io_md": false, 00:05:07.422 "write_zeroes": true, 00:05:07.422 "zcopy": true, 00:05:07.422 "get_zone_info": false, 00:05:07.422 "zone_management": false, 00:05:07.422 "zone_append": false, 00:05:07.422 "compare": false, 00:05:07.422 "compare_and_write": false, 00:05:07.422 "abort": true, 00:05:07.422 "seek_hole": false, 00:05:07.422 "seek_data": false, 00:05:07.422 "copy": true, 00:05:07.422 "nvme_iov_md": false 00:05:07.422 }, 00:05:07.422 "memory_domains": [ 00:05:07.422 { 00:05:07.422 "dma_device_id": "system", 00:05:07.422 "dma_device_type": 1 00:05:07.422 }, 00:05:07.422 { 00:05:07.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.422 "dma_device_type": 2 00:05:07.422 } 00:05:07.422 ], 00:05:07.422 "driver_specific": { 00:05:07.422 "passthru": { 00:05:07.422 "name": "Passthru0", 00:05:07.422 "base_bdev_name": "Malloc2" 00:05:07.422 } 00:05:07.422 } 00:05:07.422 } 00:05:07.422 ]' 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.422 00:05:07.422 real 0m0.283s 00:05:07.422 user 0m0.182s 00:05:07.422 sys 0m0.038s 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.422 00:16:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.422 ************************************ 00:05:07.422 END TEST rpc_daemon_integrity 00:05:07.422 ************************************ 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:07.422 00:16:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:07.422 00:16:20 rpc -- rpc/rpc.sh@84 -- # killprocess 858091 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@948 -- # '[' -z 858091 ']' 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@952 -- # kill -0 858091 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@953 -- # uname 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.422 00:16:20 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858091 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858091' 00:05:07.727 killing process with pid 858091 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@967 -- # kill 858091 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@972 -- # wait 858091 00:05:07.727 00:05:07.727 real 0m2.347s 00:05:07.727 user 0m3.041s 00:05:07.727 sys 0m0.679s 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.727 00:16:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.727 ************************************ 00:05:07.727 END TEST rpc 00:05:07.727 ************************************ 00:05:07.727 00:16:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.727 00:16:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:07.727 00:16:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.727 00:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.727 00:16:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.727 ************************************ 00:05:07.727 START TEST skip_rpc 00:05:07.727 ************************************ 00:05:07.728 00:16:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:08.009 * Looking for test storage... 
00:05:08.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:08.009 00:16:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.009 00:16:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.009 00:16:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:08.009 00:16:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.009 00:16:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.009 00:16:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.009 ************************************ 00:05:08.009 START TEST skip_rpc 00:05:08.009 ************************************ 00:05:08.009 00:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:08.009 00:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=858641 00:05:08.009 00:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.009 00:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:08.009 00:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:08.009 [2024-07-16 00:16:21.534887] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:08.009 [2024-07-16 00:16:21.534946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858641 ] 00:05:08.009 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.009 [2024-07-16 00:16:21.608657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.270 [2024-07-16 00:16:21.684899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 858641 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 858641 ']' 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 858641 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858641 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858641' 00:05:13.554 killing process with pid 858641 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 858641 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 858641 00:05:13.554 00:05:13.554 real 0m5.280s 00:05:13.554 user 0m5.062s 00:05:13.554 sys 0m0.252s 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.554 00:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 ************************************ 00:05:13.554 END TEST skip_rpc 00:05:13.554 ************************************ 00:05:13.554 00:16:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.554 00:16:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:13.554 00:16:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.554 00:16:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.554 00:16:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 ************************************ 00:05:13.554 START TEST skip_rpc_with_json 00:05:13.554 ************************************ 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=859708 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 859708 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 859708 ']' 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
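Note: skip_rpc_with_json, which starts here, first checks that nvmf_get_transports fails while no transport exists, then creates the TCP transport, saves the live configuration, and finally relaunches the target non-interactively from that file (the config.json dump and the --json run appear in the entries below). A rough manual equivalent, with the same rpc.py path assumptions as the earlier sketch:

    sudo ./spdk/scripts/rpc.py nvmf_create_transport -t tcp
    sudo ./spdk/scripts/rpc.py save_config > config.json
    # Replay the saved configuration without starting an RPC server.
    sudo ./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json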
00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.554 00:16:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.554 [2024-07-16 00:16:26.880102] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:13.554 [2024-07-16 00:16:26.880157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859708 ] 00:05:13.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.554 [2024-07-16 00:16:26.948895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.554 [2024-07-16 00:16:27.021964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.126 [2024-07-16 00:16:27.644115] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:14.126 request: 00:05:14.126 { 00:05:14.126 "trtype": "tcp", 00:05:14.126 "method": "nvmf_get_transports", 00:05:14.126 "req_id": 1 00:05:14.126 } 00:05:14.126 Got JSON-RPC error response 00:05:14.126 response: 00:05:14.126 { 00:05:14.126 "code": -19, 00:05:14.126 "message": "No such device" 00:05:14.126 } 00:05:14.126 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.127 [2024-07-16 00:16:27.656243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.127 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.387 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.387 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.387 { 00:05:14.387 "subsystems": [ 00:05:14.387 { 00:05:14.387 "subsystem": "vfio_user_target", 00:05:14.387 "config": null 00:05:14.387 }, 00:05:14.387 { 00:05:14.387 "subsystem": "keyring", 00:05:14.387 "config": [] 00:05:14.387 }, 00:05:14.387 { 00:05:14.387 "subsystem": "iobuf", 00:05:14.387 "config": [ 00:05:14.387 { 00:05:14.387 "method": "iobuf_set_options", 00:05:14.387 "params": { 00:05:14.387 "small_pool_count": 8192, 00:05:14.387 "large_pool_count": 1024, 00:05:14.387 "small_bufsize": 8192, 00:05:14.387 "large_bufsize": 
135168 00:05:14.387 } 00:05:14.387 } 00:05:14.387 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "sock", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "sock_set_default_impl", 00:05:14.388 "params": { 00:05:14.388 "impl_name": "posix" 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "sock_impl_set_options", 00:05:14.388 "params": { 00:05:14.388 "impl_name": "ssl", 00:05:14.388 "recv_buf_size": 4096, 00:05:14.388 "send_buf_size": 4096, 00:05:14.388 "enable_recv_pipe": true, 00:05:14.388 "enable_quickack": false, 00:05:14.388 "enable_placement_id": 0, 00:05:14.388 "enable_zerocopy_send_server": true, 00:05:14.388 "enable_zerocopy_send_client": false, 00:05:14.388 "zerocopy_threshold": 0, 00:05:14.388 "tls_version": 0, 00:05:14.388 "enable_ktls": false 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "sock_impl_set_options", 00:05:14.388 "params": { 00:05:14.388 "impl_name": "posix", 00:05:14.388 "recv_buf_size": 2097152, 00:05:14.388 "send_buf_size": 2097152, 00:05:14.388 "enable_recv_pipe": true, 00:05:14.388 "enable_quickack": false, 00:05:14.388 "enable_placement_id": 0, 00:05:14.388 "enable_zerocopy_send_server": true, 00:05:14.388 "enable_zerocopy_send_client": false, 00:05:14.388 "zerocopy_threshold": 0, 00:05:14.388 "tls_version": 0, 00:05:14.388 "enable_ktls": false 00:05:14.388 } 00:05:14.388 } 00:05:14.388 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "vmd", 00:05:14.388 "config": [] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "accel", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "accel_set_options", 00:05:14.388 "params": { 00:05:14.388 "small_cache_size": 128, 00:05:14.388 "large_cache_size": 16, 00:05:14.388 "task_count": 2048, 00:05:14.388 "sequence_count": 2048, 00:05:14.388 "buf_count": 2048 00:05:14.388 } 00:05:14.388 } 00:05:14.388 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "bdev", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "bdev_set_options", 00:05:14.388 "params": { 00:05:14.388 "bdev_io_pool_size": 65535, 00:05:14.388 "bdev_io_cache_size": 256, 00:05:14.388 "bdev_auto_examine": true, 00:05:14.388 "iobuf_small_cache_size": 128, 00:05:14.388 "iobuf_large_cache_size": 16 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "bdev_raid_set_options", 00:05:14.388 "params": { 00:05:14.388 "process_window_size_kb": 1024 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "bdev_iscsi_set_options", 00:05:14.388 "params": { 00:05:14.388 "timeout_sec": 30 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "bdev_nvme_set_options", 00:05:14.388 "params": { 00:05:14.388 "action_on_timeout": "none", 00:05:14.388 "timeout_us": 0, 00:05:14.388 "timeout_admin_us": 0, 00:05:14.388 "keep_alive_timeout_ms": 10000, 00:05:14.388 "arbitration_burst": 0, 00:05:14.388 "low_priority_weight": 0, 00:05:14.388 "medium_priority_weight": 0, 00:05:14.388 "high_priority_weight": 0, 00:05:14.388 "nvme_adminq_poll_period_us": 10000, 00:05:14.388 "nvme_ioq_poll_period_us": 0, 00:05:14.388 "io_queue_requests": 0, 00:05:14.388 "delay_cmd_submit": true, 00:05:14.388 "transport_retry_count": 4, 00:05:14.388 "bdev_retry_count": 3, 00:05:14.388 "transport_ack_timeout": 0, 00:05:14.388 "ctrlr_loss_timeout_sec": 0, 00:05:14.388 "reconnect_delay_sec": 0, 00:05:14.388 "fast_io_fail_timeout_sec": 0, 00:05:14.388 "disable_auto_failback": false, 00:05:14.388 "generate_uuids": false, 00:05:14.388 "transport_tos": 0, 
00:05:14.388 "nvme_error_stat": false, 00:05:14.388 "rdma_srq_size": 0, 00:05:14.388 "io_path_stat": false, 00:05:14.388 "allow_accel_sequence": false, 00:05:14.388 "rdma_max_cq_size": 0, 00:05:14.388 "rdma_cm_event_timeout_ms": 0, 00:05:14.388 "dhchap_digests": [ 00:05:14.388 "sha256", 00:05:14.388 "sha384", 00:05:14.388 "sha512" 00:05:14.388 ], 00:05:14.388 "dhchap_dhgroups": [ 00:05:14.388 "null", 00:05:14.388 "ffdhe2048", 00:05:14.388 "ffdhe3072", 00:05:14.388 "ffdhe4096", 00:05:14.388 "ffdhe6144", 00:05:14.388 "ffdhe8192" 00:05:14.388 ] 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "bdev_nvme_set_hotplug", 00:05:14.388 "params": { 00:05:14.388 "period_us": 100000, 00:05:14.388 "enable": false 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "bdev_wait_for_examine" 00:05:14.388 } 00:05:14.388 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "scsi", 00:05:14.388 "config": null 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "scheduler", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "framework_set_scheduler", 00:05:14.388 "params": { 00:05:14.388 "name": "static" 00:05:14.388 } 00:05:14.388 } 00:05:14.388 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "vhost_scsi", 00:05:14.388 "config": [] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "vhost_blk", 00:05:14.388 "config": [] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "ublk", 00:05:14.388 "config": [] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "nbd", 00:05:14.388 "config": [] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "nvmf", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "nvmf_set_config", 00:05:14.388 "params": { 00:05:14.388 "discovery_filter": "match_any", 00:05:14.388 "admin_cmd_passthru": { 00:05:14.388 "identify_ctrlr": false 00:05:14.388 } 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "nvmf_set_max_subsystems", 00:05:14.388 "params": { 00:05:14.388 "max_subsystems": 1024 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "nvmf_set_crdt", 00:05:14.388 "params": { 00:05:14.388 "crdt1": 0, 00:05:14.388 "crdt2": 0, 00:05:14.388 "crdt3": 0 00:05:14.388 } 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "method": "nvmf_create_transport", 00:05:14.388 "params": { 00:05:14.388 "trtype": "TCP", 00:05:14.388 "max_queue_depth": 128, 00:05:14.388 "max_io_qpairs_per_ctrlr": 127, 00:05:14.388 "in_capsule_data_size": 4096, 00:05:14.388 "max_io_size": 131072, 00:05:14.388 "io_unit_size": 131072, 00:05:14.388 "max_aq_depth": 128, 00:05:14.388 "num_shared_buffers": 511, 00:05:14.388 "buf_cache_size": 4294967295, 00:05:14.388 "dif_insert_or_strip": false, 00:05:14.388 "zcopy": false, 00:05:14.388 "c2h_success": true, 00:05:14.388 "sock_priority": 0, 00:05:14.388 "abort_timeout_sec": 1, 00:05:14.388 "ack_timeout": 0, 00:05:14.388 "data_wr_pool_size": 0 00:05:14.388 } 00:05:14.388 } 00:05:14.388 ] 00:05:14.388 }, 00:05:14.388 { 00:05:14.388 "subsystem": "iscsi", 00:05:14.388 "config": [ 00:05:14.388 { 00:05:14.388 "method": "iscsi_set_options", 00:05:14.388 "params": { 00:05:14.388 "node_base": "iqn.2016-06.io.spdk", 00:05:14.388 "max_sessions": 128, 00:05:14.388 "max_connections_per_session": 2, 00:05:14.388 "max_queue_depth": 64, 00:05:14.388 "default_time2wait": 2, 00:05:14.388 "default_time2retain": 20, 00:05:14.389 "first_burst_length": 8192, 00:05:14.389 "immediate_data": true, 00:05:14.389 "allow_duplicated_isid": false, 00:05:14.389 
"error_recovery_level": 0, 00:05:14.389 "nop_timeout": 60, 00:05:14.389 "nop_in_interval": 30, 00:05:14.389 "disable_chap": false, 00:05:14.389 "require_chap": false, 00:05:14.389 "mutual_chap": false, 00:05:14.389 "chap_group": 0, 00:05:14.389 "max_large_datain_per_connection": 64, 00:05:14.389 "max_r2t_per_connection": 4, 00:05:14.389 "pdu_pool_size": 36864, 00:05:14.389 "immediate_data_pool_size": 16384, 00:05:14.389 "data_out_pool_size": 2048 00:05:14.389 } 00:05:14.389 } 00:05:14.389 ] 00:05:14.389 } 00:05:14.389 ] 00:05:14.389 } 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 859708 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 859708 ']' 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 859708 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859708 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859708' 00:05:14.389 killing process with pid 859708 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 859708 00:05:14.389 00:16:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 859708 00:05:14.651 00:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=860020 00:05:14.651 00:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:14.651 00:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 860020 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 860020 ']' 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 860020 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860020 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860020' 00:05:19.942 killing process with pid 860020 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 860020 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 860020 00:05:19.942 00:16:33 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.942 00:05:19.942 real 0m6.542s 00:05:19.942 user 0m6.396s 00:05:19.942 sys 0m0.537s 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.942 ************************************ 00:05:19.942 END TEST skip_rpc_with_json 00:05:19.942 ************************************ 00:05:19.942 00:16:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:19.942 00:16:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:19.942 00:16:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.942 00:16:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.942 00:16:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.942 ************************************ 00:05:19.942 START TEST skip_rpc_with_delay 00:05:19.942 ************************************ 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.942 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.943 [2024-07-16 00:16:33.494941] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:19.943 [2024-07-16 00:16:33.495029] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.943 00:05:19.943 real 0m0.073s 00:05:19.943 user 0m0.043s 00:05:19.943 sys 0m0.030s 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.943 00:16:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:19.943 ************************************ 00:05:19.943 END TEST skip_rpc_with_delay 00:05:19.943 ************************************ 00:05:19.943 00:16:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:19.943 00:16:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.943 00:16:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.943 00:16:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.943 00:16:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.943 00:16:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.943 00:16:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.204 ************************************ 00:05:20.204 START TEST exit_on_failed_rpc_init 00:05:20.204 ************************************ 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=861173 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 861173 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 861173 ']' 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.204 00:16:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.204 [2024-07-16 00:16:33.657835] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:20.204 [2024-07-16 00:16:33.657898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861173 ] 00:05:20.204 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.204 [2024-07-16 00:16:33.731900] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.204 [2024-07-16 00:16:33.807818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.148 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.149 [2024-07-16 00:16:34.503285] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
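The second spdk_tgt instance being launched here (core mask 0x2) is expected to fail: both instances default to the same RPC Unix domain socket, /var/tmp/spdk.sock, so the newcomer cannot claim it. A rough sketch of the collision that exit_on_failed_rpc_init provokes, assuming the binary path from the trace:

    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$TGT" -m 0x1 &                    # first instance owns /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                            # crude wait; the test uses waitforlisten instead
    "$TGT" -m 0x2                      # fails: "RPC Unix domain socket path /var/tmp/spdk.sock in use"
    echo "second instance exit: $?"    # non-zero exit is what the test asserts
    kill -SIGINT "$first_pid"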
00:05:21.149 [2024-07-16 00:16:34.503336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861414 ] 00:05:21.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.149 [2024-07-16 00:16:34.587281] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.149 [2024-07-16 00:16:34.651943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.149 [2024-07-16 00:16:34.652004] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:21.149 [2024-07-16 00:16:34.652013] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:21.149 [2024-07-16 00:16:34.652020] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 861173 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 861173 ']' 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 861173 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861173 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861173' 00:05:21.149 killing process with pid 861173 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 861173 00:05:21.149 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 861173 00:05:21.410 00:05:21.410 real 0m1.378s 00:05:21.410 user 0m1.604s 00:05:21.410 sys 0m0.401s 00:05:21.410 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.410 00:16:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.410 ************************************ 00:05:21.410 END TEST exit_on_failed_rpc_init 00:05:21.410 ************************************ 00:05:21.410 00:16:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.410 00:16:35 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.410 00:05:21.410 real 0m13.687s 00:05:21.410 user 0m13.258s 00:05:21.410 sys 0m1.504s 00:05:21.410 00:16:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.410 00:16:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.410 ************************************ 00:05:21.410 END TEST skip_rpc 00:05:21.410 ************************************ 00:05:21.672 00:16:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.672 00:16:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:21.672 00:16:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.672 00:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.672 00:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.672 ************************************ 00:05:21.672 START TEST rpc_client 00:05:21.672 ************************************ 00:05:21.672 00:16:35 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:21.672 * Looking for test storage... 00:05:21.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:21.672 00:16:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:21.672 OK 00:05:21.672 00:16:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:21.672 00:05:21.672 real 0m0.126s 00:05:21.672 user 0m0.050s 00:05:21.672 sys 0m0.083s 00:05:21.672 00:16:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.672 00:16:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:21.672 ************************************ 00:05:21.672 END TEST rpc_client 00:05:21.672 ************************************ 00:05:21.672 00:16:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.672 00:16:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:21.672 00:16:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.672 00:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.672 00:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.672 ************************************ 00:05:21.672 START TEST json_config 00:05:21.672 ************************************ 00:05:21.672 00:16:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.933 00:16:35 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.933 00:16:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.933 00:16:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.933 00:16:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.933 00:16:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.933 00:16:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.933 00:16:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.933 00:16:35 json_config -- paths/export.sh@5 -- # export PATH 00:05:21.933 00:16:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@47 -- # : 0 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.933 00:16:35 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.933 00:16:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.933 00:16:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:21.934 INFO: JSON configuration test init 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.934 00:16:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:21.934 00:16:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.934 00:16:35 json_config -- json_config/common.sh@10 -- # shift 00:05:21.934 00:16:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.934 00:16:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.934 00:16:35 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:21.934 00:16:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.934 00:16:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.934 00:16:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=861693 00:05:21.934 00:16:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.934 Waiting for target to run... 00:05:21.934 00:16:35 json_config -- json_config/common.sh@25 -- # waitforlisten 861693 /var/tmp/spdk_tgt.sock 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 861693 ']' 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.934 00:16:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.934 00:16:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.934 [2024-07-16 00:16:35.478150] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:21.934 [2024-07-16 00:16:35.478240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861693 ] 00:05:21.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.195 [2024-07-16 00:16:35.745563] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.195 [2024-07-16 00:16:35.795753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:22.766 00:16:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.766 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.766 00:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:22.766 00:16:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:22.766 00:16:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:23.338 00:16:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.338 00:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:23.338 00:16:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:23.338 00:16:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:23.598 00:16:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:23.598 00:16:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:23.598 00:16:36 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:23.599 00:16:36 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:23.599 00:16:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.599 00:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:23.599 00:16:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.599 00:16:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.599 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.599 MallocForNvmf0 00:05:23.599 00:16:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.599 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.858 MallocForNvmf1 
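The two malloc bdevs just created are wired into an NVMe-oF target by the RPC calls that follow in the trace. Condensed into one place, the sequence issued over the target's RPC socket looks roughly like this (arguments copied from the log):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # bring up the TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420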
00:05:23.858 00:16:37 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.858 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.859 [2024-07-16 00:16:37.479296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.118 00:16:37 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.118 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.118 00:16:37 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.118 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.378 00:16:37 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.378 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.378 00:16:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.378 00:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.637 [2024-07-16 00:16:38.073225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.637 00:16:38 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:24.637 00:16:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.637 00:16:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.637 00:16:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:24.637 00:16:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.637 00:16:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.637 00:16:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:24.637 00:16:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.637 00:16:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.897 MallocBdevForConfigChangeCheck 00:05:24.897 00:16:38 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:24.897 00:16:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.897 00:16:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.897 00:16:38 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:24.897 00:16:38 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.157 00:16:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:25.157 INFO: shutting down applications... 00:05:25.157 00:16:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:25.157 00:16:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:25.157 00:16:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:25.157 00:16:38 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:25.725 Calling clear_iscsi_subsystem 00:05:25.725 Calling clear_nvmf_subsystem 00:05:25.725 Calling clear_nbd_subsystem 00:05:25.725 Calling clear_ublk_subsystem 00:05:25.725 Calling clear_vhost_blk_subsystem 00:05:25.725 Calling clear_vhost_scsi_subsystem 00:05:25.725 Calling clear_bdev_subsystem 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:25.725 00:16:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:25.985 00:16:39 json_config -- json_config/json_config.sh@345 -- # break 00:05:25.985 00:16:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:25.985 00:16:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:25.985 00:16:39 json_config -- json_config/common.sh@31 -- # local app=target 00:05:25.985 00:16:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.985 00:16:39 json_config -- json_config/common.sh@35 -- # [[ -n 861693 ]] 00:05:25.985 00:16:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 861693 00:05:25.985 00:16:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.985 00:16:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.985 00:16:39 json_config -- json_config/common.sh@41 -- # kill -0 861693 00:05:25.985 00:16:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.555 00:16:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.555 00:16:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.556 00:16:39 json_config -- json_config/common.sh@41 -- # kill -0 861693 00:05:26.556 00:16:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.556 00:16:39 json_config -- json_config/common.sh@43 -- # break 00:05:26.556 00:16:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.556 00:16:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.556 SPDK target shutdown done 00:05:26.556 00:16:39 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:26.556 INFO: relaunching applications... 00:05:26.556 00:16:39 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.556 00:16:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.556 00:16:39 json_config -- json_config/common.sh@10 -- # shift 00:05:26.556 00:16:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.556 00:16:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.556 00:16:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.556 00:16:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.556 00:16:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.556 00:16:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=862666 00:05:26.556 00:16:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.556 Waiting for target to run... 00:05:26.556 00:16:39 json_config -- json_config/common.sh@25 -- # waitforlisten 862666 /var/tmp/spdk_tgt.sock 00:05:26.556 00:16:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 862666 ']' 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.556 00:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.556 [2024-07-16 00:16:39.963988] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
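The relaunch starting here feeds the configuration captured a moment ago by save_config back into a fresh target via --json, which is what lets the subsequent diff verify that a saved configuration reproduces the running state. In outline (the redirect into spdk_tgt_config.json is an assumption about the test's plumbing, not copied from the log):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $RPC save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    # ... shut the old target down, then restart from the saved file:
    "$TGT" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json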
00:05:26.556 [2024-07-16 00:16:39.964045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862666 ] 00:05:26.556 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.815 [2024-07-16 00:16:40.365830] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.815 [2024-07-16 00:16:40.427714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.386 [2024-07-16 00:16:40.929380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.386 [2024-07-16 00:16:40.961732] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.386 00:16:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.386 00:16:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.386 00:16:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.386 00:05:27.386 00:16:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:27.386 00:16:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:27.386 INFO: Checking if target configuration is the same... 00:05:27.386 00:16:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.386 00:16:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:27.386 00:16:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.386 + '[' 2 -ne 2 ']' 00:05:27.386 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.386 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.647 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.647 +++ basename /dev/fd/62 00:05:27.647 ++ mktemp /tmp/62.XXX 00:05:27.647 + tmp_file_1=/tmp/62.xHE 00:05:27.647 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.647 + tmp_file_2=/tmp/spdk_tgt_config.json.cZH 00:05:27.647 + ret=0 00:05:27.647 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.908 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.908 + diff -u /tmp/62.xHE /tmp/spdk_tgt_config.json.cZH 00:05:27.908 + echo 'INFO: JSON config files are the same' 00:05:27.908 INFO: JSON config files are the same 00:05:27.908 + rm /tmp/62.xHE /tmp/spdk_tgt_config.json.cZH 00:05:27.908 + exit 0 00:05:27.908 00:16:41 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:27.908 00:16:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.908 INFO: changing configuration and checking if this can be detected... 
00:05:27.908 00:16:41 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.908 00:16:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.908 00:16:41 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.908 00:16:41 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:27.908 00:16:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.908 + '[' 2 -ne 2 ']' 00:05:27.908 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.908 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.908 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.908 +++ basename /dev/fd/62 00:05:27.908 ++ mktemp /tmp/62.XXX 00:05:27.908 + tmp_file_1=/tmp/62.pdS 00:05:28.169 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.169 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.169 + tmp_file_2=/tmp/spdk_tgt_config.json.b7h 00:05:28.169 + ret=0 00:05:28.169 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.430 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.430 + diff -u /tmp/62.pdS /tmp/spdk_tgt_config.json.b7h 00:05:28.430 + ret=1 00:05:28.430 + echo '=== Start of file: /tmp/62.pdS ===' 00:05:28.430 + cat /tmp/62.pdS 00:05:28.430 + echo '=== End of file: /tmp/62.pdS ===' 00:05:28.430 + echo '' 00:05:28.430 + echo '=== Start of file: /tmp/spdk_tgt_config.json.b7h ===' 00:05:28.430 + cat /tmp/spdk_tgt_config.json.b7h 00:05:28.430 + echo '=== End of file: /tmp/spdk_tgt_config.json.b7h ===' 00:05:28.430 + echo '' 00:05:28.430 + rm /tmp/62.pdS /tmp/spdk_tgt_config.json.b7h 00:05:28.430 + exit 1 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:28.430 INFO: configuration change detected. 
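Both configuration checks above follow the same recipe: dump the live configuration, key-sort both JSON documents with config_filter.py, and diff them. An empty diff means the saved file still matches the target, while deleting MallocBdevForConfigChangeCheck guarantees a non-empty diff. A rough equivalent of what json_diff.sh does, with live.json and saved.json as placeholder names introduced here:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > live.json
    $FILTER -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > saved.json
    diff -u saved.json live.json                              # empty => "JSON config files are the same"
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck    # perturb the running target
    $RPC save_config | $FILTER -method sort | diff -u saved.json -   # now reports a difference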
00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 862666 ]] 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.430 00:16:41 json_config -- json_config/json_config.sh@323 -- # killprocess 862666 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@948 -- # '[' -z 862666 ']' 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@952 -- # kill -0 862666 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@953 -- # uname 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862666 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862666' 00:05:28.430 killing process with pid 862666 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@967 -- # kill 862666 00:05:28.430 00:16:41 json_config -- common/autotest_common.sh@972 -- # wait 862666 00:05:28.691 00:16:42 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.691 00:16:42 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:28.691 00:16:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.691 00:16:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.691 00:16:42 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:28.691 00:16:42 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:28.691 INFO: Success 00:05:28.691 00:05:28.691 real 0m7.010s 00:05:28.691 user 
0m8.302s 00:05:28.691 sys 0m1.816s 00:05:28.691 00:16:42 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.691 00:16:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.691 ************************************ 00:05:28.691 END TEST json_config 00:05:28.691 ************************************ 00:05:28.954 00:16:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.954 00:16:42 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:28.954 00:16:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.954 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.954 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.954 ************************************ 00:05:28.954 START TEST json_config_extra_key 00:05:28.954 ************************************ 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.954 00:16:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.954 00:16:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.954 00:16:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.954 00:16:42 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.954 00:16:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.954 00:16:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.954 00:16:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:28.954 00:16:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.954 00:16:42 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.954 00:16:42 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:28.954 INFO: launching applications... 00:05:28.954 00:16:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=863438 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.954 Waiting for target to run... 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 863438 /var/tmp/spdk_tgt.sock 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 863438 ']' 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.954 00:16:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.954 00:16:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.954 [2024-07-16 00:16:42.531177] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
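Once this target is up, the test tears it down with the same signal-and-poll pattern used for the earlier json_config targets: send SIGINT, then probe the PID for up to 30 half-second intervals. In outline, with $pid standing for the recorded target PID:

    kill -SIGINT "$pid"                      # request a clean shutdown
    i=0
    while (( i < 30 )); do
        kill -0 "$pid" 2>/dev/null || break  # kill -0 only checks whether the process is still alive
        sleep 0.5
        i=$((i + 1))
    done
    echo 'SPDK target shutdown done'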
00:05:28.954 [2024-07-16 00:16:42.531260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863438 ] 00:05:28.954 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.215 [2024-07-16 00:16:42.795668] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.476 [2024-07-16 00:16:42.848379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.737 00:16:43 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.737 00:16:43 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:29.737 00:05:29.737 00:16:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:29.737 INFO: shutting down applications... 00:05:29.737 00:16:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 863438 ]] 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 863438 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 863438 00:05:29.737 00:16:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 863438 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.308 00:16:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.308 SPDK target shutdown done 00:05:30.308 00:16:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:30.308 Success 00:05:30.308 00:05:30.308 real 0m1.426s 00:05:30.308 user 0m1.075s 00:05:30.308 sys 0m0.374s 00:05:30.308 00:16:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.308 00:16:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.308 ************************************ 00:05:30.308 END TEST json_config_extra_key 00:05:30.308 ************************************ 00:05:30.308 00:16:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.308 00:16:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.308 00:16:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.308 00:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.308 00:16:43 -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.308 ************************************ 00:05:30.308 START TEST alias_rpc 00:05:30.308 ************************************ 00:05:30.308 00:16:43 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.569 * Looking for test storage... 00:05:30.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:30.569 00:16:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.569 00:16:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=863742 00:05:30.569 00:16:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 863742 00:05:30.569 00:16:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 863742 ']' 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.569 00:16:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.569 [2024-07-16 00:16:44.041197] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:30.569 [2024-07-16 00:16:44.041284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863742 ] 00:05:30.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.569 [2024-07-16 00:16:44.112005] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.569 [2024-07-16 00:16:44.186777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.511 00:16:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:31.511 00:16:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 863742 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 863742 ']' 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 863742 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.511 00:16:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 863742 00:05:31.511 00:16:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.511 00:16:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.511 00:16:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 863742' 00:05:31.511 killing process with pid 863742 00:05:31.511 00:16:45 alias_rpc -- common/autotest_common.sh@967 
-- # kill 863742 00:05:31.511 00:16:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 863742 00:05:31.771 00:05:31.771 real 0m1.349s 00:05:31.771 user 0m1.458s 00:05:31.771 sys 0m0.372s 00:05:31.771 00:16:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.771 00:16:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.771 ************************************ 00:05:31.771 END TEST alias_rpc 00:05:31.771 ************************************ 00:05:31.771 00:16:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.771 00:16:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:31.771 00:16:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.771 00:16:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.771 00:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.771 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.771 ************************************ 00:05:31.771 START TEST spdkcli_tcp 00:05:31.771 ************************************ 00:05:31.771 00:16:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.771 * Looking for test storage... 00:05:31.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:31.771 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:31.771 00:16:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:31.771 00:16:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=864005 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 864005 00:05:32.032 00:16:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 864005 ']' 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.032 00:16:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 [2024-07-16 00:16:45.469399] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
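Both shutdowns traced above (the json_config_extra_key target and the alias_rpc target) follow the same pattern: send SIGINT, then poll kill -0 on a half-second interval with a bounded retry budget until the process is gone. A minimal sketch of that pattern; the function name and the kill -9 fallback are assumptions of this sketch, not the exact common.sh/killprocess code:

# Ask an SPDK app to exit, then wait (up to ~15 s) for the pid to disappear.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0       # already gone
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "pid $pid exited cleanly"
            return 0
        fi
        sleep 0.5
    done
    echo "pid $pid still running, forcing it" >&2     # fallback assumed here
    kill -9 "$pid"
}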
00:05:32.032 [2024-07-16 00:16:45.469474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864005 ] 00:05:32.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.032 [2024-07-16 00:16:45.544356] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.032 [2024-07-16 00:16:45.620558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.032 [2024-07-16 00:16:45.620653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.604 00:16:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.604 00:16:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:32.604 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=864228 00:05:32.604 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:32.604 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:32.866 [ 00:05:32.866 "bdev_malloc_delete", 00:05:32.866 "bdev_malloc_create", 00:05:32.866 "bdev_null_resize", 00:05:32.866 "bdev_null_delete", 00:05:32.866 "bdev_null_create", 00:05:32.866 "bdev_nvme_cuse_unregister", 00:05:32.866 "bdev_nvme_cuse_register", 00:05:32.866 "bdev_opal_new_user", 00:05:32.866 "bdev_opal_set_lock_state", 00:05:32.866 "bdev_opal_delete", 00:05:32.866 "bdev_opal_get_info", 00:05:32.866 "bdev_opal_create", 00:05:32.866 "bdev_nvme_opal_revert", 00:05:32.866 "bdev_nvme_opal_init", 00:05:32.866 "bdev_nvme_send_cmd", 00:05:32.866 "bdev_nvme_get_path_iostat", 00:05:32.866 "bdev_nvme_get_mdns_discovery_info", 00:05:32.866 "bdev_nvme_stop_mdns_discovery", 00:05:32.866 "bdev_nvme_start_mdns_discovery", 00:05:32.866 "bdev_nvme_set_multipath_policy", 00:05:32.866 "bdev_nvme_set_preferred_path", 00:05:32.866 "bdev_nvme_get_io_paths", 00:05:32.866 "bdev_nvme_remove_error_injection", 00:05:32.866 "bdev_nvme_add_error_injection", 00:05:32.866 "bdev_nvme_get_discovery_info", 00:05:32.866 "bdev_nvme_stop_discovery", 00:05:32.866 "bdev_nvme_start_discovery", 00:05:32.866 "bdev_nvme_get_controller_health_info", 00:05:32.866 "bdev_nvme_disable_controller", 00:05:32.866 "bdev_nvme_enable_controller", 00:05:32.866 "bdev_nvme_reset_controller", 00:05:32.866 "bdev_nvme_get_transport_statistics", 00:05:32.866 "bdev_nvme_apply_firmware", 00:05:32.866 "bdev_nvme_detach_controller", 00:05:32.866 "bdev_nvme_get_controllers", 00:05:32.866 "bdev_nvme_attach_controller", 00:05:32.866 "bdev_nvme_set_hotplug", 00:05:32.866 "bdev_nvme_set_options", 00:05:32.866 "bdev_passthru_delete", 00:05:32.866 "bdev_passthru_create", 00:05:32.866 "bdev_lvol_set_parent_bdev", 00:05:32.866 "bdev_lvol_set_parent", 00:05:32.866 "bdev_lvol_check_shallow_copy", 00:05:32.866 "bdev_lvol_start_shallow_copy", 00:05:32.866 "bdev_lvol_grow_lvstore", 00:05:32.866 "bdev_lvol_get_lvols", 00:05:32.866 "bdev_lvol_get_lvstores", 00:05:32.866 "bdev_lvol_delete", 00:05:32.866 "bdev_lvol_set_read_only", 00:05:32.866 "bdev_lvol_resize", 00:05:32.866 "bdev_lvol_decouple_parent", 00:05:32.866 "bdev_lvol_inflate", 00:05:32.866 "bdev_lvol_rename", 00:05:32.866 "bdev_lvol_clone_bdev", 00:05:32.866 "bdev_lvol_clone", 00:05:32.866 "bdev_lvol_snapshot", 00:05:32.866 "bdev_lvol_create", 00:05:32.866 "bdev_lvol_delete_lvstore", 00:05:32.866 
"bdev_lvol_rename_lvstore", 00:05:32.866 "bdev_lvol_create_lvstore", 00:05:32.866 "bdev_raid_set_options", 00:05:32.866 "bdev_raid_remove_base_bdev", 00:05:32.866 "bdev_raid_add_base_bdev", 00:05:32.866 "bdev_raid_delete", 00:05:32.866 "bdev_raid_create", 00:05:32.866 "bdev_raid_get_bdevs", 00:05:32.866 "bdev_error_inject_error", 00:05:32.866 "bdev_error_delete", 00:05:32.866 "bdev_error_create", 00:05:32.866 "bdev_split_delete", 00:05:32.866 "bdev_split_create", 00:05:32.866 "bdev_delay_delete", 00:05:32.866 "bdev_delay_create", 00:05:32.866 "bdev_delay_update_latency", 00:05:32.866 "bdev_zone_block_delete", 00:05:32.866 "bdev_zone_block_create", 00:05:32.866 "blobfs_create", 00:05:32.866 "blobfs_detect", 00:05:32.866 "blobfs_set_cache_size", 00:05:32.866 "bdev_aio_delete", 00:05:32.866 "bdev_aio_rescan", 00:05:32.866 "bdev_aio_create", 00:05:32.866 "bdev_ftl_set_property", 00:05:32.866 "bdev_ftl_get_properties", 00:05:32.866 "bdev_ftl_get_stats", 00:05:32.866 "bdev_ftl_unmap", 00:05:32.866 "bdev_ftl_unload", 00:05:32.866 "bdev_ftl_delete", 00:05:32.866 "bdev_ftl_load", 00:05:32.866 "bdev_ftl_create", 00:05:32.866 "bdev_virtio_attach_controller", 00:05:32.866 "bdev_virtio_scsi_get_devices", 00:05:32.866 "bdev_virtio_detach_controller", 00:05:32.866 "bdev_virtio_blk_set_hotplug", 00:05:32.866 "bdev_iscsi_delete", 00:05:32.866 "bdev_iscsi_create", 00:05:32.866 "bdev_iscsi_set_options", 00:05:32.866 "accel_error_inject_error", 00:05:32.866 "ioat_scan_accel_module", 00:05:32.866 "dsa_scan_accel_module", 00:05:32.866 "iaa_scan_accel_module", 00:05:32.866 "vfu_virtio_create_scsi_endpoint", 00:05:32.866 "vfu_virtio_scsi_remove_target", 00:05:32.866 "vfu_virtio_scsi_add_target", 00:05:32.866 "vfu_virtio_create_blk_endpoint", 00:05:32.866 "vfu_virtio_delete_endpoint", 00:05:32.866 "keyring_file_remove_key", 00:05:32.866 "keyring_file_add_key", 00:05:32.866 "keyring_linux_set_options", 00:05:32.866 "iscsi_get_histogram", 00:05:32.866 "iscsi_enable_histogram", 00:05:32.866 "iscsi_set_options", 00:05:32.866 "iscsi_get_auth_groups", 00:05:32.866 "iscsi_auth_group_remove_secret", 00:05:32.866 "iscsi_auth_group_add_secret", 00:05:32.866 "iscsi_delete_auth_group", 00:05:32.866 "iscsi_create_auth_group", 00:05:32.866 "iscsi_set_discovery_auth", 00:05:32.866 "iscsi_get_options", 00:05:32.866 "iscsi_target_node_request_logout", 00:05:32.866 "iscsi_target_node_set_redirect", 00:05:32.866 "iscsi_target_node_set_auth", 00:05:32.866 "iscsi_target_node_add_lun", 00:05:32.866 "iscsi_get_stats", 00:05:32.866 "iscsi_get_connections", 00:05:32.866 "iscsi_portal_group_set_auth", 00:05:32.866 "iscsi_start_portal_group", 00:05:32.866 "iscsi_delete_portal_group", 00:05:32.866 "iscsi_create_portal_group", 00:05:32.866 "iscsi_get_portal_groups", 00:05:32.866 "iscsi_delete_target_node", 00:05:32.866 "iscsi_target_node_remove_pg_ig_maps", 00:05:32.866 "iscsi_target_node_add_pg_ig_maps", 00:05:32.866 "iscsi_create_target_node", 00:05:32.866 "iscsi_get_target_nodes", 00:05:32.866 "iscsi_delete_initiator_group", 00:05:32.866 "iscsi_initiator_group_remove_initiators", 00:05:32.866 "iscsi_initiator_group_add_initiators", 00:05:32.866 "iscsi_create_initiator_group", 00:05:32.866 "iscsi_get_initiator_groups", 00:05:32.866 "nvmf_set_crdt", 00:05:32.866 "nvmf_set_config", 00:05:32.866 "nvmf_set_max_subsystems", 00:05:32.866 "nvmf_stop_mdns_prr", 00:05:32.866 "nvmf_publish_mdns_prr", 00:05:32.866 "nvmf_subsystem_get_listeners", 00:05:32.866 "nvmf_subsystem_get_qpairs", 00:05:32.866 "nvmf_subsystem_get_controllers", 00:05:32.866 
"nvmf_get_stats", 00:05:32.866 "nvmf_get_transports", 00:05:32.866 "nvmf_create_transport", 00:05:32.866 "nvmf_get_targets", 00:05:32.866 "nvmf_delete_target", 00:05:32.866 "nvmf_create_target", 00:05:32.866 "nvmf_subsystem_allow_any_host", 00:05:32.866 "nvmf_subsystem_remove_host", 00:05:32.866 "nvmf_subsystem_add_host", 00:05:32.866 "nvmf_ns_remove_host", 00:05:32.866 "nvmf_ns_add_host", 00:05:32.867 "nvmf_subsystem_remove_ns", 00:05:32.867 "nvmf_subsystem_add_ns", 00:05:32.867 "nvmf_subsystem_listener_set_ana_state", 00:05:32.867 "nvmf_discovery_get_referrals", 00:05:32.867 "nvmf_discovery_remove_referral", 00:05:32.867 "nvmf_discovery_add_referral", 00:05:32.867 "nvmf_subsystem_remove_listener", 00:05:32.867 "nvmf_subsystem_add_listener", 00:05:32.867 "nvmf_delete_subsystem", 00:05:32.867 "nvmf_create_subsystem", 00:05:32.867 "nvmf_get_subsystems", 00:05:32.867 "env_dpdk_get_mem_stats", 00:05:32.867 "nbd_get_disks", 00:05:32.867 "nbd_stop_disk", 00:05:32.867 "nbd_start_disk", 00:05:32.867 "ublk_recover_disk", 00:05:32.867 "ublk_get_disks", 00:05:32.867 "ublk_stop_disk", 00:05:32.867 "ublk_start_disk", 00:05:32.867 "ublk_destroy_target", 00:05:32.867 "ublk_create_target", 00:05:32.867 "virtio_blk_create_transport", 00:05:32.867 "virtio_blk_get_transports", 00:05:32.867 "vhost_controller_set_coalescing", 00:05:32.867 "vhost_get_controllers", 00:05:32.867 "vhost_delete_controller", 00:05:32.867 "vhost_create_blk_controller", 00:05:32.867 "vhost_scsi_controller_remove_target", 00:05:32.867 "vhost_scsi_controller_add_target", 00:05:32.867 "vhost_start_scsi_controller", 00:05:32.867 "vhost_create_scsi_controller", 00:05:32.867 "thread_set_cpumask", 00:05:32.867 "framework_get_governor", 00:05:32.867 "framework_get_scheduler", 00:05:32.867 "framework_set_scheduler", 00:05:32.867 "framework_get_reactors", 00:05:32.867 "thread_get_io_channels", 00:05:32.867 "thread_get_pollers", 00:05:32.867 "thread_get_stats", 00:05:32.867 "framework_monitor_context_switch", 00:05:32.867 "spdk_kill_instance", 00:05:32.867 "log_enable_timestamps", 00:05:32.867 "log_get_flags", 00:05:32.867 "log_clear_flag", 00:05:32.867 "log_set_flag", 00:05:32.867 "log_get_level", 00:05:32.867 "log_set_level", 00:05:32.867 "log_get_print_level", 00:05:32.867 "log_set_print_level", 00:05:32.867 "framework_enable_cpumask_locks", 00:05:32.867 "framework_disable_cpumask_locks", 00:05:32.867 "framework_wait_init", 00:05:32.867 "framework_start_init", 00:05:32.867 "scsi_get_devices", 00:05:32.867 "bdev_get_histogram", 00:05:32.867 "bdev_enable_histogram", 00:05:32.867 "bdev_set_qos_limit", 00:05:32.867 "bdev_set_qd_sampling_period", 00:05:32.867 "bdev_get_bdevs", 00:05:32.867 "bdev_reset_iostat", 00:05:32.867 "bdev_get_iostat", 00:05:32.867 "bdev_examine", 00:05:32.867 "bdev_wait_for_examine", 00:05:32.867 "bdev_set_options", 00:05:32.867 "notify_get_notifications", 00:05:32.867 "notify_get_types", 00:05:32.867 "accel_get_stats", 00:05:32.867 "accel_set_options", 00:05:32.867 "accel_set_driver", 00:05:32.867 "accel_crypto_key_destroy", 00:05:32.867 "accel_crypto_keys_get", 00:05:32.867 "accel_crypto_key_create", 00:05:32.867 "accel_assign_opc", 00:05:32.867 "accel_get_module_info", 00:05:32.867 "accel_get_opc_assignments", 00:05:32.867 "vmd_rescan", 00:05:32.867 "vmd_remove_device", 00:05:32.867 "vmd_enable", 00:05:32.867 "sock_get_default_impl", 00:05:32.867 "sock_set_default_impl", 00:05:32.867 "sock_impl_set_options", 00:05:32.867 "sock_impl_get_options", 00:05:32.867 "iobuf_get_stats", 00:05:32.867 "iobuf_set_options", 
00:05:32.867 "keyring_get_keys", 00:05:32.867 "framework_get_pci_devices", 00:05:32.867 "framework_get_config", 00:05:32.867 "framework_get_subsystems", 00:05:32.867 "vfu_tgt_set_base_path", 00:05:32.867 "trace_get_info", 00:05:32.867 "trace_get_tpoint_group_mask", 00:05:32.867 "trace_disable_tpoint_group", 00:05:32.867 "trace_enable_tpoint_group", 00:05:32.867 "trace_clear_tpoint_mask", 00:05:32.867 "trace_set_tpoint_mask", 00:05:32.867 "spdk_get_version", 00:05:32.867 "rpc_get_methods" 00:05:32.867 ] 00:05:32.867 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.867 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:32.867 00:16:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 864005 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 864005 ']' 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 864005 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864005 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864005' 00:05:32.867 killing process with pid 864005 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 864005 00:05:32.867 00:16:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 864005 00:05:33.127 00:05:33.127 real 0m1.404s 00:05:33.127 user 0m2.542s 00:05:33.127 sys 0m0.438s 00:05:33.127 00:16:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.127 00:16:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 ************************************ 00:05:33.127 END TEST spdkcli_tcp 00:05:33.127 ************************************ 00:05:33.127 00:16:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.127 00:16:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.127 00:16:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.127 00:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.128 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 ************************************ 00:05:33.388 START TEST dpdk_mem_utility 00:05:33.388 ************************************ 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.388 * Looking for test storage... 
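The spdkcli_tcp run above never talks to the UNIX-domain socket directly: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side with connection retries (-r) and a per-call timeout (-t), which is how the long rpc_get_methods listing was produced. A condensed sketch of that bridge, using the same addresses as the test:

# Bridge the target's UNIX-domain RPC socket to 127.0.0.1:9998.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Drive the target over TCP; -r retries the connection, -t caps each wait.
./spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"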
00:05:33.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:33.388 00:16:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:33.388 00:16:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=864311 00:05:33.388 00:16:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 864311 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 864311 ']' 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.388 00:16:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 00:16:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.388 [2024-07-16 00:16:46.912540] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:33.388 [2024-07-16 00:16:46.912601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864311 ] 00:05:33.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.388 [2024-07-16 00:16:46.981256] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.648 [2024-07-16 00:16:47.051051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.239 { 00:05:34.239 "filename": "/tmp/spdk_mem_dump.txt" 00:05:34.239 } 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.239 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:34.239 1 heaps totaling size 814.000000 MiB 00:05:34.239 size: 814.000000 MiB heap id: 0 00:05:34.239 end heaps---------- 00:05:34.239 8 mempools totaling size 598.116089 MiB 00:05:34.239 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:34.239 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:34.239 size: 84.521057 MiB name: bdev_io_864311 00:05:34.239 size: 51.011292 MiB name: evtpool_864311 00:05:34.239 size: 
50.003479 MiB name: msgpool_864311 00:05:34.239 size: 21.763794 MiB name: PDU_Pool 00:05:34.239 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:34.239 size: 0.026123 MiB name: Session_Pool 00:05:34.239 end mempools------- 00:05:34.239 6 memzones totaling size 4.142822 MiB 00:05:34.239 size: 1.000366 MiB name: RG_ring_0_864311 00:05:34.239 size: 1.000366 MiB name: RG_ring_1_864311 00:05:34.239 size: 1.000366 MiB name: RG_ring_4_864311 00:05:34.239 size: 1.000366 MiB name: RG_ring_5_864311 00:05:34.239 size: 0.125366 MiB name: RG_ring_2_864311 00:05:34.239 size: 0.015991 MiB name: RG_ring_3_864311 00:05:34.239 end memzones------- 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:34.239 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:34.239 list of free elements. size: 12.519348 MiB 00:05:34.239 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:34.239 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:34.239 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:34.239 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:34.239 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:34.239 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:34.239 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:34.239 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:34.239 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:34.239 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:34.239 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:34.239 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:34.239 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:34.239 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:34.239 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:34.239 list of standard malloc elements. 
size: 199.218079 MiB 00:05:34.239 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:34.239 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:34.239 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:34.239 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:34.239 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:34.239 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:34.239 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:34.239 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:34.239 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:34.239 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:34.239 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:34.239 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:34.239 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:34.239 list of memzone associated elements. 
size: 602.262573 MiB 00:05:34.239 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:34.239 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:34.239 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:34.239 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:34.239 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:34.239 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_864311_0 00:05:34.239 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:34.239 associated memzone info: size: 48.002930 MiB name: MP_evtpool_864311_0 00:05:34.239 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:34.239 associated memzone info: size: 48.002930 MiB name: MP_msgpool_864311_0 00:05:34.239 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:34.239 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:34.239 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:34.239 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:34.239 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:34.239 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_864311 00:05:34.239 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:34.239 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_864311 00:05:34.239 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:34.239 associated memzone info: size: 1.007996 MiB name: MP_evtpool_864311 00:05:34.239 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:34.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:34.239 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:34.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:34.239 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:34.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:34.239 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:34.239 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:34.239 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:34.239 associated memzone info: size: 1.000366 MiB name: RG_ring_0_864311 00:05:34.239 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:34.239 associated memzone info: size: 1.000366 MiB name: RG_ring_1_864311 00:05:34.239 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:34.239 associated memzone info: size: 1.000366 MiB name: RG_ring_4_864311 00:05:34.239 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:34.239 associated memzone info: size: 1.000366 MiB name: RG_ring_5_864311 00:05:34.239 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:34.239 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_864311 00:05:34.239 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:34.239 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:34.239 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:34.239 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:34.239 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:34.239 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:34.239 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:34.239 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_864311 00:05:34.239 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:34.239 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:34.239 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:34.239 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:34.239 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:34.239 associated memzone info: size: 0.015991 MiB name: RG_ring_3_864311 00:05:34.239 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:34.239 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:34.239 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:34.239 associated memzone info: size: 0.000183 MiB name: MP_msgpool_864311 00:05:34.239 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:34.239 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_864311 00:05:34.239 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:34.239 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:34.239 00:16:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 864311 00:05:34.239 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 864311 ']' 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 864311 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864311 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864311' 00:05:34.240 killing process with pid 864311 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 864311 00:05:34.240 00:16:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 864311 00:05:34.501 00:05:34.501 real 0m1.255s 00:05:34.501 user 0m1.323s 00:05:34.501 sys 0m0.361s 00:05:34.501 00:16:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.501 00:16:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 ************************************ 00:05:34.501 END TEST dpdk_mem_utility 00:05:34.501 ************************************ 00:05:34.501 00:16:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.501 00:16:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:34.501 00:16:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.501 00:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.501 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.501 ************************************ 00:05:34.501 START TEST event 00:05:34.501 ************************************ 00:05:34.501 00:16:48 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:34.761 * Looking for test storage... 
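The dpdk_mem_utility pass above asks the running target for a DPDK memory dump via the env_dpdk_get_mem_stats RPC (which reports /tmp/spdk_mem_dump.txt) and then renders it with scripts/dpdk_mem_info.py, once as a summary and once per heap with -m 0. A minimal sketch of that flow, assuming a target already listening on the default /var/tmp/spdk.sock:

# Ask the running target to dump its DPDK memory state.
./spdk/scripts/rpc.py env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

# Summarize heaps, mempools and memzones from that dump.
./spdk/scripts/dpdk_mem_info.py

# Show the detailed element layout of heap 0.
./spdk/scripts/dpdk_mem_info.py -m 0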
00:05:34.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:34.761 00:16:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:34.761 00:16:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:34.761 00:16:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.761 00:16:48 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:34.761 00:16:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.761 00:16:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.761 ************************************ 00:05:34.761 START TEST event_perf 00:05:34.761 ************************************ 00:05:34.761 00:16:48 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.761 Running I/O for 1 seconds...[2024-07-16 00:16:48.256389] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:34.761 [2024-07-16 00:16:48.256481] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864688 ] 00:05:34.761 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.761 [2024-07-16 00:16:48.327325] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.021 [2024-07-16 00:16:48.397987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.021 [2024-07-16 00:16:48.398106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.021 [2024-07-16 00:16:48.398274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.021 [2024-07-16 00:16:48.398295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.961 Running I/O for 1 seconds... 00:05:35.961 lcore 0: 174635 00:05:35.961 lcore 1: 174631 00:05:35.961 lcore 2: 174631 00:05:35.961 lcore 3: 174634 00:05:35.961 done. 00:05:35.961 00:05:35.961 real 0m1.213s 00:05:35.961 user 0m4.139s 00:05:35.961 sys 0m0.071s 00:05:35.962 00:16:49 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.962 00:16:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.962 ************************************ 00:05:35.962 END TEST event_perf 00:05:35.962 ************************************ 00:05:35.962 00:16:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.962 00:16:49 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.962 00:16:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:35.962 00:16:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.962 00:16:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.962 ************************************ 00:05:35.962 START TEST event_reactor 00:05:35.962 ************************************ 00:05:35.962 00:16:49 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.962 [2024-07-16 00:16:49.542223] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
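The -m core masks used throughout map directly onto reactor placement: 0x1 gives a single reactor on core 0, 0x3 gives cores 0-1, and event_perf's 0xF above brings reactors up on cores 0-3, which is why four lcore counters are reported. A tiny illustration of how a hex mask expands to core numbers (plain shell arithmetic, not SPDK code):

mask=0xF
for ((core = 0; core < 8; core++)); do
    if (( (mask >> core) & 1 )); then
        echo "expect a reactor on core $core"
    fi
done
# 0xF prints cores 0-3; 0x3 prints 0-1; 0x1 prints only core 0.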
00:05:35.962 [2024-07-16 00:16:49.542323] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865041 ] 00:05:35.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.222 [2024-07-16 00:16:49.610894] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.222 [2024-07-16 00:16:49.676611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.162 test_start 00:05:37.162 oneshot 00:05:37.162 tick 100 00:05:37.162 tick 100 00:05:37.162 tick 250 00:05:37.162 tick 100 00:05:37.162 tick 100 00:05:37.162 tick 100 00:05:37.162 tick 250 00:05:37.162 tick 500 00:05:37.162 tick 100 00:05:37.162 tick 100 00:05:37.162 tick 250 00:05:37.162 tick 100 00:05:37.162 tick 100 00:05:37.162 test_end 00:05:37.162 00:05:37.162 real 0m1.207s 00:05:37.162 user 0m1.124s 00:05:37.162 sys 0m0.079s 00:05:37.162 00:16:50 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.162 00:16:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:37.162 ************************************ 00:05:37.162 END TEST event_reactor 00:05:37.162 ************************************ 00:05:37.162 00:16:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:37.162 00:16:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:37.162 00:16:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:37.162 00:16:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.162 00:16:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.423 ************************************ 00:05:37.423 START TEST event_reactor_perf 00:05:37.423 ************************************ 00:05:37.423 00:16:50 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:37.423 [2024-07-16 00:16:50.819499] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:37.423 [2024-07-16 00:16:50.819602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865343 ] 00:05:37.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.423 [2024-07-16 00:16:50.904218] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.423 [2024-07-16 00:16:50.968703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.810 test_start 00:05:38.810 test_end 00:05:38.810 Performance: 369074 events per second 00:05:38.810 00:05:38.810 real 0m1.221s 00:05:38.810 user 0m1.128s 00:05:38.810 sys 0m0.090s 00:05:38.810 00:16:52 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.810 00:16:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 ************************************ 00:05:38.810 END TEST event_reactor_perf 00:05:38.810 ************************************ 00:05:38.810 00:16:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.810 00:16:52 event -- event/event.sh@49 -- # uname -s 00:05:38.810 00:16:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:38.810 00:16:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:38.810 00:16:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.810 00:16:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.810 00:16:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 ************************************ 00:05:38.810 START TEST event_scheduler 00:05:38.810 ************************************ 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:38.810 * Looking for test storage... 00:05:38.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:38.810 00:16:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:38.810 00:16:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=865576 00:05:38.810 00:16:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.810 00:16:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:38.810 00:16:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 865576 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 865576 ']' 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.810 00:16:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 [2024-07-16 00:16:52.245858] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:38.810 [2024-07-16 00:16:52.245936] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865576 ] 00:05:38.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.810 [2024-07-16 00:16:52.307427] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.810 [2024-07-16 00:16:52.374341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.810 [2024-07-16 00:16:52.374525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.810 [2024-07-16 00:16:52.374668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.810 [2024-07-16 00:16:52.374669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.752 00:16:53 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.752 00:16:53 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:39.752 00:16:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 [2024-07-16 00:16:53.032745] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:39.753 [2024-07-16 00:16:53.032761] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:39.753 [2024-07-16 00:16:53.032768] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:39.753 [2024-07-16 00:16:53.032772] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:39.753 [2024-07-16 00:16:53.032776] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 [2024-07-16 00:16:53.087287] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
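The scheduler app above is started with --wait-for-rpc, so framework initialization is deferred until RPCs arrive: the test sets the dynamic scheduler first and only then calls framework_start_init, and the dpdk_governor failure is tolerated because the dynamic scheduler falls back to its built-in load/core/busy limits (20/80/95). A minimal sketch of driving that sequence from the shell, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py:

# The app was launched with --wait-for-rpc, so nothing is initialized yet.
./spdk/scripts/rpc.py framework_set_scheduler dynamic   # must precede init
./spdk/scripts/rpc.py framework_start_init              # subsystems come up now
./spdk/scripts/rpc.py framework_get_scheduler           # confirm the active scheduler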
00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 ************************************ 00:05:39.753 START TEST scheduler_create_thread 00:05:39.753 ************************************ 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 2 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 3 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 4 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 5 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 6 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 7 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 8 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 9 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.753 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.358 10 00:05:40.358 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.358 00:16:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:40.358 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.358 00:16:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.767 00:16:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.767 00:16:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:41.767 00:16:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:41.767 00:16:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.767 00:16:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.336 00:16:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.336 00:16:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.336 00:16:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.336 00:16:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.283 00:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.283 00:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:43.283 00:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:43.283 00:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.283 00:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.852 00:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.852 00:05:43.852 real 0m4.222s 00:05:43.852 user 0m0.024s 00:05:43.852 sys 0m0.007s 00:05:43.852 00:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.852 00:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.852 ************************************ 00:05:43.852 END TEST scheduler_create_thread 00:05:43.852 ************************************ 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:43.852 00:16:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.852 00:16:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 865576 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 865576 ']' 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 865576 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865576 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865576' 00:05:43.852 killing process with pid 865576 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 865576 00:05:43.852 00:16:57 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 865576 00:05:44.111 [2024-07-16 00:16:57.628450] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
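scheduler_create_thread above exercises the test's own scheduler_plugin RPCs: it creates pinned threads with a CPU mask and an "active" percentage, retunes one with scheduler_thread_set_active, and deletes another. A condensed sketch of those calls, assuming rpc_cmd forwards to scripts/rpc.py --plugin with the plugin module on its search path, as the trace suggests:

RPC=./spdk/scripts/rpc.py

# Create a thread pinned to core 0 that reports itself as 100% active.
tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

# Drop its reported load to 50% so the dynamic scheduler can rebalance it.
"$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

# Remove the thread again.
"$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"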
00:05:44.371 00:05:44.371 real 0m5.705s 00:05:44.371 user 0m12.735s 00:05:44.371 sys 0m0.348s 00:05:44.371 00:16:57 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.371 00:16:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.371 ************************************ 00:05:44.371 END TEST event_scheduler 00:05:44.371 ************************************ 00:05:44.371 00:16:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:44.371 00:16:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:44.371 00:16:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:44.371 00:16:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.371 00:16:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.371 00:16:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.371 ************************************ 00:05:44.371 START TEST app_repeat 00:05:44.371 ************************************ 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=866844 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 866844' 00:05:44.371 Process app_repeat pid: 866844 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.371 spdk_app_start Round 0 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 866844 /var/tmp/spdk-nbd.sock 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 866844 ']' 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.371 00:16:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.371 00:16:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.372 [2024-07-16 00:16:57.909693] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
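For context, the app_repeat test started above loads the nbd kernel module and launches the test/event/app_repeat helper with the options shown in the log (-r /var/tmp/spdk-nbd.sock -m 0x3 -t 4), then waits until the helper's RPC server is listening before driving it. A simplified sketch of that launch step follows; the readiness loop uses rpc_get_methods as a stand-in for the test's waitforlisten helper, and the plain kill in the trap replaces the script's killprocess, so this is an approximation rather than the literal script.

    modprobe nbd
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

    # wait until the app answers RPCs on the UNIX domain socket
    until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done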
00:05:44.372 [2024-07-16 00:16:57.909753] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866844 ] 00:05:44.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.372 [2024-07-16 00:16:57.976479] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.631 [2024-07-16 00:16:58.042335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.631 [2024-07-16 00:16:58.042502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.202 00:16:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.202 00:16:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.202 00:16:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.202 Malloc0 00:05:45.463 00:16:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.463 Malloc1 00:05:45.463 00:16:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.463 00:16:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.724 /dev/nbd0 00:05:45.724 00:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.724 00:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.724 00:16:59 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.724 1+0 records in 00:05:45.724 1+0 records out 00:05:45.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274519 s, 14.9 MB/s 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.724 00:16:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.724 00:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.724 00:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.724 00:16:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.985 /dev/nbd1 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.985 1+0 records in 00:05:45.985 1+0 records out 00:05:45.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029459 s, 13.9 MB/s 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.985 00:16:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.985 00:16:59 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.985 { 00:05:45.985 "nbd_device": "/dev/nbd0", 00:05:45.985 "bdev_name": "Malloc0" 00:05:45.985 }, 00:05:45.985 { 00:05:45.985 "nbd_device": "/dev/nbd1", 00:05:45.985 "bdev_name": "Malloc1" 00:05:45.985 } 00:05:45.985 ]' 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.985 { 00:05:45.985 "nbd_device": "/dev/nbd0", 00:05:45.985 "bdev_name": "Malloc0" 00:05:45.985 }, 00:05:45.985 { 00:05:45.985 "nbd_device": "/dev/nbd1", 00:05:45.985 "bdev_name": "Malloc1" 00:05:45.985 } 00:05:45.985 ]' 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.985 /dev/nbd1' 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.985 /dev/nbd1' 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.985 00:16:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.986 00:16:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.247 256+0 records in 00:05:46.247 256+0 records out 00:05:46.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011498 s, 91.2 MB/s 00:05:46.247 00:16:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.247 00:16:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.247 256+0 records in 00:05:46.247 256+0 records out 00:05:46.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158418 s, 66.2 MB/s 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.248 256+0 records in 00:05:46.248 256+0 records out 00:05:46.248 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0174186 s, 60.2 MB/s 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.248 00:16:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.524 00:17:00 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.524 00:17:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.785 00:17:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.785 00:17:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.785 00:17:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.045 [2024-07-16 00:17:00.531114] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.045 [2024-07-16 00:17:00.596140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.045 [2024-07-16 00:17:00.596142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.045 [2024-07-16 00:17:00.627849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.045 [2024-07-16 00:17:00.627884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:50.345 spdk_app_start Round 1 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 866844 /var/tmp/spdk-nbd.sock 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 866844 ']' 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
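The Round 0 block above is the nbd_rpc_data_verify cycle: both Malloc bdevs are exported as kernel NBD devices over the app's RPC socket, a 1 MiB file of random data is written through each device with dd, the device contents are compared back against that file with cmp, and the devices are stopped again. A condensed sketch of the cycle, using the same RPCs and dd/cmp options that appear in the log (the temporary file path here is an illustrative stand-in for the test's nbdrandtest file):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/tmp/nbdrandtest   # illustrative path

    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct    # push it through the NBD device
        cmp -b -n 1M "$TMP" "$nbd"                               # read back and verify
    done
    rm "$TMP"

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1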
00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.345 00:17:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.345 Malloc0 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.345 Malloc1 00:05:50.345 00:17:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.345 00:17:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.606 /dev/nbd0 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:50.606 1+0 records in 00:05:50.606 1+0 records out 00:05:50.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183584 s, 22.3 MB/s 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.606 /dev/nbd1 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.606 1+0 records in 00:05:50.606 1+0 records out 00:05:50.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272018 s, 15.1 MB/s 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.606 00:17:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.606 00:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.867 { 00:05:50.867 "nbd_device": "/dev/nbd0", 00:05:50.867 "bdev_name": "Malloc0" 00:05:50.867 }, 00:05:50.867 { 00:05:50.867 "nbd_device": "/dev/nbd1", 00:05:50.867 "bdev_name": "Malloc1" 00:05:50.867 } 00:05:50.867 ]' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.867 { 00:05:50.867 "nbd_device": "/dev/nbd0", 00:05:50.867 "bdev_name": "Malloc0" 00:05:50.867 }, 00:05:50.867 { 00:05:50.867 "nbd_device": "/dev/nbd1", 00:05:50.867 "bdev_name": "Malloc1" 00:05:50.867 } 00:05:50.867 ]' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.867 /dev/nbd1' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.867 /dev/nbd1' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.867 256+0 records in 00:05:50.867 256+0 records out 00:05:50.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113243 s, 92.6 MB/s 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.867 256+0 records in 00:05:50.867 256+0 records out 00:05:50.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162511 s, 64.5 MB/s 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.867 256+0 records in 00:05:50.867 256+0 records out 00:05:50.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01731 s, 60.6 MB/s 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.867 00:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.128 00:17:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.129 00:17:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.129 00:17:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.129 00:17:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.129 00:17:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.129 00:17:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.389 00:17:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.650 00:17:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.650 00:17:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.650 00:17:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.910 [2024-07-16 00:17:05.349955] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.910 [2024-07-16 00:17:05.412979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.910 [2024-07-16 00:17:05.412982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.910 [2024-07-16 00:17:05.445681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.910 [2024-07-16 00:17:05.445718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.211 00:17:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.211 00:17:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:55.211 spdk_app_start Round 2 00:05:55.211 00:17:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 866844 /var/tmp/spdk-nbd.sock 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 866844 ']' 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.211 00:17:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:55.211 00:17:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.212 Malloc0 00:05:55.212 00:17:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.212 Malloc1 00:05:55.212 00:17:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.212 00:17:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.212 /dev/nbd0 00:05:55.473 00:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.473 00:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.473 1+0 records in 00:05:55.473 1+0 records out 00:05:55.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303881 s, 13.5 MB/s 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.473 00:17:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.473 00:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.473 00:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.473 00:17:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.473 /dev/nbd1 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.473 1+0 records in 00:05:55.473 1+0 records out 00:05:55.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020819 s, 19.7 MB/s 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.473 00:17:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.473 00:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.734 { 00:05:55.734 "nbd_device": "/dev/nbd0", 00:05:55.734 "bdev_name": "Malloc0" 00:05:55.734 }, 00:05:55.734 { 00:05:55.734 "nbd_device": "/dev/nbd1", 00:05:55.734 "bdev_name": "Malloc1" 00:05:55.734 } 00:05:55.734 ]' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.734 { 00:05:55.734 "nbd_device": "/dev/nbd0", 00:05:55.734 "bdev_name": "Malloc0" 00:05:55.734 }, 00:05:55.734 { 00:05:55.734 "nbd_device": "/dev/nbd1", 00:05:55.734 "bdev_name": "Malloc1" 00:05:55.734 } 00:05:55.734 ]' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.734 /dev/nbd1' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.734 /dev/nbd1' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.734 256+0 records in 00:05:55.734 256+0 records out 00:05:55.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114526 s, 91.6 MB/s 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.734 256+0 records in 00:05:55.734 256+0 records out 00:05:55.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159183 s, 65.9 MB/s 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.734 256+0 records in 00:05:55.734 256+0 records out 00:05:55.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175238 s, 59.8 MB/s 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.734 00:17:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.995 00:17:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.256 00:17:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.256 00:17:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.516 00:17:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.776 [2024-07-16 00:17:10.187427] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.776 [2024-07-16 00:17:10.252005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.776 [2024-07-16 00:17:10.252007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.776 [2024-07-16 00:17:10.284205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.776 [2024-07-16 00:17:10.284250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.077 00:17:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 866844 /var/tmp/spdk-nbd.sock 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 866844 ']' 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.077 00:17:13 event.app_repeat -- event/event.sh@39 -- # killprocess 866844 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 866844 ']' 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 866844 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 866844 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 866844' 00:06:00.077 killing process with pid 866844 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@967 -- # kill 866844 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@972 -- # wait 866844 00:06:00.077 spdk_app_start is called in Round 0. 00:06:00.077 Shutdown signal received, stop current app iteration 00:06:00.077 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:00.077 spdk_app_start is called in Round 1. 00:06:00.077 Shutdown signal received, stop current app iteration 00:06:00.077 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:00.077 spdk_app_start is called in Round 2. 00:06:00.077 Shutdown signal received, stop current app iteration 00:06:00.077 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:00.077 spdk_app_start is called in Round 3. 
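The repeated Malloc/NBD blocks above are the passes of the app_repeat loop: rounds 0 through 2 each register the bdevs, run the NBD write/verify cycle, and then ask the app to shut itself down over RPC before the next round, while the final round is ended by killing the helper process, which is why the log prints a 'spdk_app_start is called in Round N' / 'Shutdown signal received' pair for rounds 0 through 3. A stripped-down sketch of that driver loop, using only calls visible in the log:

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        # ...create Malloc0/Malloc1 and run the NBD write/verify cycle shown earlier...
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let the app tear down and reinitialize for the next round
    done
    # the last round is cleaned up by killing the app_repeat process itself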
00:06:00.077 Shutdown signal received, stop current app iteration 00:06:00.077 00:17:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.077 00:17:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.077 00:06:00.077 real 0m15.499s 00:06:00.077 user 0m33.508s 00:06:00.077 sys 0m2.083s 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.077 00:17:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 ************************************ 00:06:00.077 END TEST app_repeat 00:06:00.077 ************************************ 00:06:00.077 00:17:13 event -- common/autotest_common.sh@1142 -- # return 0 00:06:00.077 00:17:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.077 00:17:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.077 00:17:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.077 00:17:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.077 00:17:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 ************************************ 00:06:00.077 START TEST cpu_locks 00:06:00.077 ************************************ 00:06:00.077 00:17:13 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.077 * Looking for test storage... 00:06:00.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.077 00:17:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.077 00:17:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.077 00:17:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.077 00:17:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.077 00:17:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.077 00:17:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.077 00:17:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 ************************************ 00:06:00.077 START TEST default_locks 00:06:00.077 ************************************ 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=870101 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 870101 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 870101 ']' 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
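From this point on, every cpu_locks sub-test starts with the same preamble visible in the trace: launch spdk_tgt pinned to a core mask, capture its pid, and block until the RPC socket is up. Condensed, and assuming waitforlisten from autotest_common.sh as the readiness check:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Pin the target to core 0 only (-m 0x1) and remember the pid for later checks.
    "$spdk"/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    # Wait for the target to listen on the default /var/tmp/spdk.sock.
    waitforlisten "$spdk_tgt_pid"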
00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.077 00:17:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 [2024-07-16 00:17:13.646400] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:00.077 [2024-07-16 00:17:13.646462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870101 ] 00:06:00.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.337 [2024-07-16 00:17:13.716351] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.337 [2024-07-16 00:17:13.790634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.908 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.908 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:00.908 00:17:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 870101 00:06:00.908 00:17:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 870101 00:06:00.908 00:17:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.168 lslocks: write error 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 870101 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 870101 ']' 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 870101 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870101 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870101' 00:06:01.168 killing process with pid 870101 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 870101 00:06:01.168 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 870101 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 870101 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 870101 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.429 00:17:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 870101 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 870101 ']' 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (870101) - No such process 00:06:01.429 ERROR: process (pid: 870101) is no longer running 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.429 00:06:01.429 real 0m1.418s 00:06:01.429 user 0m1.507s 00:06:01.429 sys 0m0.471s 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.429 00:17:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.429 ************************************ 00:06:01.429 END TEST default_locks 00:06:01.429 ************************************ 00:06:01.429 00:17:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.429 00:17:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.429 00:17:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.429 00:17:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.429 00:17:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.690 ************************************ 00:06:01.690 START TEST default_locks_via_rpc 00:06:01.690 ************************************ 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=870465 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 870465 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 870465 ']' 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.690 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.690 [2024-07-16 00:17:15.130661] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:01.690 [2024-07-16 00:17:15.130713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870465 ] 00:06:01.690 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.690 [2024-07-16 00:17:15.198431] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.690 [2024-07-16 00:17:15.268571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.260 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.260 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:02.260 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.260 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.260 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 870465 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 870465 00:06:02.520 00:17:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
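The via-RPC variant toggles the per-core locks at runtime rather than at startup, and the final assertion is the same lslocks probe the framework calls locks_exist. Stripped of the wrappers, the sequence traced above is roughly the following (socket and pid as set up earlier in the test):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the core locks over RPC; the framework's no_locks then expects the
    # /var/tmp/spdk_cpu_lock_* glob to come up empty.
    "$rpc" framework_disable_cpumask_locks
    "$rpc" framework_enable_cpumask_locks
    # With locks re-enabled the spdk_cpu_lock entry must be visible again.
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock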
00:06:02.779 00:17:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 870465 00:06:02.779 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 870465 ']' 00:06:02.779 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 870465 00:06:02.779 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:02.779 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870465 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870465' 00:06:03.038 killing process with pid 870465 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 870465 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 870465 00:06:03.038 00:06:03.038 real 0m1.589s 00:06:03.038 user 0m1.687s 00:06:03.038 sys 0m0.538s 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.038 00:17:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.038 ************************************ 00:06:03.038 END TEST default_locks_via_rpc 00:06:03.038 ************************************ 00:06:03.297 00:17:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.297 00:17:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.297 00:17:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.297 00:17:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.297 00:17:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 ************************************ 00:06:03.297 START TEST non_locking_app_on_locked_coremask 00:06:03.297 ************************************ 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=870831 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 870831 /var/tmp/spdk.sock 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 870831 ']' 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.297 00:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 [2024-07-16 00:17:16.781467] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:03.297 [2024-07-16 00:17:16.781506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870831 ] 00:06:03.297 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.297 [2024-07-16 00:17:16.837537] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.297 [2024-07-16 00:17:16.901766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=871011 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 871011 /var/tmp/spdk2.sock 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 871011 ']' 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.234 00:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.234 [2024-07-16 00:17:17.574026] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:04.234 [2024-07-16 00:17:17.574078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871011 ] 00:06:04.234 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.234 [2024-07-16 00:17:17.670942] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
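What this sub-test exercises is that two targets may share a core as long as the second one opts out of lock acquisition. Reduced to the two launch lines from the trace (the second instance gets its own RPC socket so both can be driven independently; the lock file for core 0 is presumably /var/tmp/spdk_cpu_lock_000, per the naming seen later in the run):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    # First instance claims core 0 and holds its spdk_cpu_lock file.
    "$tgt" -m 0x1 &
    locked_pid=$!
    # Second instance reuses core 0 but skips lock acquisition entirely.
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    unlocked_pid=$!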
00:06:04.234 [2024-07-16 00:17:17.670971] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.234 [2024-07-16 00:17:17.799790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.803 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.803 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.803 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 870831 00:06:04.803 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 870831 00:06:04.803 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.372 lslocks: write error 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 870831 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 870831 ']' 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 870831 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870831 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870831' 00:06:05.372 killing process with pid 870831 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 870831 00:06:05.372 00:17:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 870831 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 871011 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 871011 ']' 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 871011 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871011 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871011' 00:06:05.963 killing 
process with pid 871011 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 871011 00:06:05.963 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 871011 00:06:06.223 00:06:06.224 real 0m2.877s 00:06:06.224 user 0m3.125s 00:06:06.224 sys 0m0.858s 00:06:06.224 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.224 00:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.224 ************************************ 00:06:06.224 END TEST non_locking_app_on_locked_coremask 00:06:06.224 ************************************ 00:06:06.224 00:17:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.224 00:17:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.224 00:17:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.224 00:17:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.224 00:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.224 ************************************ 00:06:06.224 START TEST locking_app_on_unlocked_coremask 00:06:06.224 ************************************ 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=871538 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 871538 /var/tmp/spdk.sock 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 871538 ']' 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.224 00:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.224 [2024-07-16 00:17:19.702359] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:06.224 [2024-07-16 00:17:19.702395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871538 ] 00:06:06.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.224 [2024-07-16 00:17:19.758353] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
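locking_app_on_unlocked_coremask is the mirror image: the first target runs without locks, and the second, lock-taking instance is the one expected to own spdk_cpu_lock. A condensed sketch of the setup and the assertion (variable names are illustrative; locks_exist in the trace is the same lslocks probe, and the waitforlisten calls are omitted for brevity):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 --disable-cpumask-locks &        # unlocked instance
    unlocked_pid=$!
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &         # locking instance
    locking_pid=$!
    # Only the second process should show an spdk_cpu_lock entry in lslocks.
    lslocks -p "$locking_pid" | grep -q spdk_cpu_lock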
00:06:06.224 [2024-07-16 00:17:19.758378] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.224 [2024-07-16 00:17:19.823571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=871556 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 871556 /var/tmp/spdk2.sock 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 871556 ']' 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.164 00:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.164 [2024-07-16 00:17:20.538837] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:07.164 [2024-07-16 00:17:20.538892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871556 ] 00:06:07.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.164 [2024-07-16 00:17:20.637658] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.164 [2024-07-16 00:17:20.766778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.734 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.734 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.734 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 871556 00:06:07.734 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.734 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 871556 00:06:08.305 lslocks: write error 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 871538 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 871538 ']' 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 871538 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871538 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871538' 00:06:08.305 killing process with pid 871538 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 871538 00:06:08.305 00:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 871538 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 871556 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 871556 ']' 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 871556 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871556 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871556' 00:06:08.876 killing process with pid 871556 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 871556 00:06:08.876 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 871556 00:06:09.137 00:06:09.137 real 0m2.849s 00:06:09.137 user 0m3.116s 00:06:09.137 sys 0m0.838s 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.137 ************************************ 00:06:09.137 END TEST locking_app_on_unlocked_coremask 00:06:09.137 ************************************ 00:06:09.137 00:17:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:09.137 00:17:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:09.137 00:17:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.137 00:17:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.137 00:17:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.137 ************************************ 00:06:09.137 START TEST locking_app_on_locked_coremask 00:06:09.137 ************************************ 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=872090 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 872090 /var/tmp/spdk.sock 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 872090 ']' 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.137 00:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.137 [2024-07-16 00:17:22.626197] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
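locking_app_on_locked_coremask below is the negative case: once core 0 is claimed, a second lock-taking target on the same mask has to refuse to start, which the framework expresses with its NOT wrapper around waitforlisten. The bare expectation, with the error text app.c prints in the trace quoted as a comment:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                # owner of core 0
    owner_pid=$!
    waitforlisten "$owner_pid"
    # Expected: "Cannot create lock on core 0, probably process <pid> has claimed it."
    if "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second target unexpectedly started on a locked core" >&2
        exit 1
    fi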
00:06:09.137 [2024-07-16 00:17:22.626241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872090 ] 00:06:09.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.137 [2024-07-16 00:17:22.682979] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.137 [2024-07-16 00:17:22.747784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=872256 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 872256 /var/tmp/spdk2.sock 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 872256 /var/tmp/spdk2.sock 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 872256 /var/tmp/spdk2.sock 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 872256 ']' 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.078 00:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.078 [2024-07-16 00:17:23.439322] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:10.078 [2024-07-16 00:17:23.439373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872256 ] 00:06:10.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.078 [2024-07-16 00:17:23.536765] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 872090 has claimed it. 00:06:10.078 [2024-07-16 00:17:23.536807] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (872256) - No such process 00:06:10.651 ERROR: process (pid: 872256) is no longer running 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.651 lslocks: write error 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 872090 ']' 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872090' 00:06:10.651 killing process with pid 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 872090 00:06:10.651 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 872090 00:06:10.912 00:06:10.912 real 0m1.890s 00:06:10.912 user 0m2.097s 00:06:10.912 sys 0m0.477s 00:06:10.912 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.912 00:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.912 ************************************ 00:06:10.912 END TEST locking_app_on_locked_coremask 00:06:10.912 ************************************ 00:06:10.912 00:17:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.912 00:17:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.912 00:17:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.912 00:17:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.912 00:17:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 ************************************ 00:06:11.172 START TEST locking_overlapped_coremask 00:06:11.172 ************************************ 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=872607 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 872607 /var/tmp/spdk.sock 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 872607 ']' 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.172 00:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 [2024-07-16 00:17:24.627637] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
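locking_overlapped_coremask generalizes the conflict to multi-core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks collide only on core 2, and that single overlap is enough to abort the second target. Afterwards check_remaining_locks verifies that exactly the lock files for the surviving mask are left. A sketch of both steps (lock-file naming taken from the check_remaining_locks trace further down):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x7 &                                # claims cores 0, 1 and 2
    waitforlisten $!
    # 0x1c wants cores 2, 3 and 4; core 2 is taken, so this launch must fail.
    "$tgt" -m 0x1c -r /var/tmp/spdk2.sock && exit 1
    # Exactly three lock files should remain, one per core of the 0x7 mask.
    ls /var/tmp/spdk_cpu_lock_*    # expected: _000 _001 _002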
00:06:11.172 [2024-07-16 00:17:24.627694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872607 ] 00:06:11.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.172 [2024-07-16 00:17:24.700640] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.172 [2024-07-16 00:17:24.774881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.172 [2024-07-16 00:17:24.774995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.172 [2024-07-16 00:17:24.774998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=872631 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 872631 /var/tmp/spdk2.sock 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 872631 /var/tmp/spdk2.sock 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 872631 /var/tmp/spdk2.sock 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 872631 ']' 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.166 00:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.166 [2024-07-16 00:17:25.439087] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:12.166 [2024-07-16 00:17:25.439141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872631 ] 00:06:12.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.167 [2024-07-16 00:17:25.521569] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 872607 has claimed it. 00:06:12.167 [2024-07-16 00:17:25.521600] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (872631) - No such process 00:06:12.428 ERROR: process (pid: 872631) is no longer running 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 872607 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 872607 ']' 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 872607 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.428 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872607 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872607' 00:06:12.689 killing process with pid 872607 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 872607 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 872607 00:06:12.689 00:06:12.689 real 0m1.757s 00:06:12.689 user 0m4.923s 00:06:12.689 sys 0m0.365s 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.689 00:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.689 ************************************ 00:06:12.689 END TEST locking_overlapped_coremask 00:06:12.689 ************************************ 00:06:12.950 00:17:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.950 00:17:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.950 00:17:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.950 00:17:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.950 00:17:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.950 ************************************ 00:06:12.950 START TEST locking_overlapped_coremask_via_rpc 00:06:12.950 ************************************ 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=872992 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 872992 /var/tmp/spdk.sock 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 872992 ']' 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.950 00:17:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.950 [2024-07-16 00:17:26.447223] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:12.950 [2024-07-16 00:17:26.447292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872992 ] 00:06:12.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.950 [2024-07-16 00:17:26.514628] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
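The last sub-test in this stretch, locking_overlapped_coremask_via_rpc, defers the same conflict to runtime: both overlapping targets are allowed to start because each passes --disable-cpumask-locks, as the two launches in the trace show. Condensed:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    # Both instances start lock-free, so the 0x7 / 0x1c overlap on core 2 is
    # tolerated for now; the conflict is only provoked later over RPC.
    "$tgt" -m 0x7 --disable-cpumask-locks &
    "$tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &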
00:06:12.950 [2024-07-16 00:17:26.514657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.211 [2024-07-16 00:17:26.581479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.211 [2024-07-16 00:17:26.581594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.211 [2024-07-16 00:17:26.581597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=873008 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 873008 /var/tmp/spdk2.sock 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 873008 ']' 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.783 00:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.783 [2024-07-16 00:17:27.257767] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:13.783 [2024-07-16 00:17:27.257817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873008 ] 00:06:13.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.783 [2024-07-16 00:17:27.340767] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.783 [2024-07-16 00:17:27.340792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.044 [2024-07-16 00:17:27.450681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.044 [2024-07-16 00:17:27.450835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.044 [2024-07-16 00:17:27.450837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.638 [2024-07-16 00:17:28.047290] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 872992 has claimed it. 
00:06:14.638 request: 00:06:14.638 { 00:06:14.638 "method": "framework_enable_cpumask_locks", 00:06:14.638 "req_id": 1 00:06:14.638 } 00:06:14.638 Got JSON-RPC error response 00:06:14.638 response: 00:06:14.638 { 00:06:14.638 "code": -32603, 00:06:14.638 "message": "Failed to claim CPU core: 2" 00:06:14.638 } 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 872992 /var/tmp/spdk.sock 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 872992 ']' 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 873008 /var/tmp/spdk2.sock 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 873008 ']' 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
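The -32603 response above is the expected result of this test case: the first target (pid 872992, -m 0x7, cores 0-2) has already enabled its core locks through framework_enable_cpumask_locks, so when the same RPC is sent to the second target (pid 873008, -m 0x1c, cores 2-4) over /var/tmp/spdk2.sock, the claim fails on the shared core 2. A rough by-hand version of the same sequence, assuming an SPDK build with build/bin/spdk_tgt and scripts/rpc.py (exact paths differ outside this workspace):

  # Sketch only: reproduce the overlapping-coremask RPC failure outside the test harness
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4
  sleep 2                                                  # crude stand-in for waitforlisten
  ./scripts/rpc.py framework_enable_cpumask_locks          # first target claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected: JSON-RPC error -32603, "Failed to claim CPU core: 2"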
00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.638 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.899 00:06:14.899 real 0m2.006s 00:06:14.899 user 0m0.777s 00:06:14.899 sys 0m0.152s 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.899 00:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 ************************************ 00:06:14.899 END TEST locking_overlapped_coremask_via_rpc 00:06:14.899 ************************************ 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.899 00:17:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.899 00:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 872992 ]] 00:06:14.899 00:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 872992 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 872992 ']' 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 872992 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872992 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872992' 00:06:14.899 killing process with pid 872992 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 872992 00:06:14.899 00:17:28 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 872992 00:06:15.160 00:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 873008 ]] 00:06:15.160 00:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 873008 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 873008 ']' 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 873008 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873008 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:15.160 00:17:28 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:15.161 00:17:28 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873008' 00:06:15.161 killing process with pid 873008 00:06:15.161 00:17:28 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 873008 00:06:15.161 00:17:28 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 873008 00:06:15.421 00:17:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.421 00:17:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.421 00:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 872992 ]] 00:06:15.421 00:17:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 872992 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 872992 ']' 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 872992 00:06:15.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (872992) - No such process 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 872992 is not found' 00:06:15.422 Process with pid 872992 is not found 00:06:15.422 00:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 873008 ]] 00:06:15.422 00:17:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 873008 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 873008 ']' 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 873008 00:06:15.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (873008) - No such process 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 873008 is not found' 00:06:15.422 Process with pid 873008 is not found 00:06:15.422 00:17:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.422 00:06:15.422 real 0m15.515s 00:06:15.422 user 0m26.806s 00:06:15.422 sys 0m4.558s 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.422 00:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.422 ************************************ 00:06:15.422 END TEST cpu_locks 00:06:15.422 ************************************ 00:06:15.422 00:17:28 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.422 00:06:15.422 real 0m40.888s 00:06:15.422 user 1m19.659s 00:06:15.422 sys 0m7.563s 00:06:15.422 00:17:28 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.422 00:17:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.422 ************************************ 00:06:15.422 END TEST event 00:06:15.422 ************************************ 00:06:15.422 00:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.422 00:17:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.422 00:17:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.422 00:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.422 00:17:29 -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.683 ************************************ 00:06:15.683 START TEST thread 00:06:15.683 ************************************ 00:06:15.683 00:17:29 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.683 * Looking for test storage... 00:06:15.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.683 00:17:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.683 00:17:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:15.683 00:17:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.683 00:17:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.683 ************************************ 00:06:15.683 START TEST thread_poller_perf 00:06:15.683 ************************************ 00:06:15.683 00:17:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.683 [2024-07-16 00:17:29.227586] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:15.683 [2024-07-16 00:17:29.227672] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873516 ] 00:06:15.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.683 [2024-07-16 00:17:29.300436] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.945 [2024-07-16 00:17:29.374624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.945 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:16.888 ====================================== 00:06:16.888 busy:2412018670 (cyc) 00:06:16.888 total_run_count: 285000 00:06:16.888 tsc_hz: 2400000000 (cyc) 00:06:16.888 ====================================== 00:06:16.888 poller_cost: 8463 (cyc), 3526 (nsec) 00:06:16.888 00:06:16.888 real 0m1.232s 00:06:16.888 user 0m1.149s 00:06:16.888 sys 0m0.079s 00:06:16.888 00:17:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.888 00:17:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.888 ************************************ 00:06:16.888 END TEST thread_poller_perf 00:06:16.888 ************************************ 00:06:16.888 00:17:30 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:16.888 00:17:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.888 00:17:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.888 00:17:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.888 00:17:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.888 ************************************ 00:06:16.888 START TEST thread_poller_perf 00:06:16.888 ************************************ 00:06:16.888 00:17:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.888 [2024-07-16 00:17:30.514765] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:16.888 [2024-07-16 00:17:30.514802] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873799 ] 00:06:17.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.149 [2024-07-16 00:17:30.571024] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.149 [2024-07-16 00:17:30.634521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.149 Running 1000 pollers for 1 seconds with 0 microseconds period. 
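The poller_cost figures in the summary above follow directly from the other numbers in it: busy cycles divided by total_run_count gives cycles per poller invocation, and tsc_hz converts that to nanoseconds (the relationship these values are consistent with). For the first run (-b 1000 -l 1 -t 1):

  # Recompute the first poller_perf summary by hand (values copied from the run above)
  busy_cyc=2412018670; runs=285000; tsc_hz=2400000000
  echo "poller_cost_cyc=$(( busy_cyc / runs ))"                          # 8463
  echo "poller_cost_nsec=$(( busy_cyc / runs * 1000000000 / tsc_hz ))"   # 3526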
00:06:18.091 ====================================== 00:06:18.091 busy:2402116442 (cyc) 00:06:18.091 total_run_count: 3805000 00:06:18.091 tsc_hz: 2400000000 (cyc) 00:06:18.091 ====================================== 00:06:18.091 poller_cost: 631 (cyc), 262 (nsec) 00:06:18.091 00:06:18.091 real 0m1.180s 00:06:18.091 user 0m1.118s 00:06:18.091 sys 0m0.058s 00:06:18.091 00:17:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.091 00:17:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.091 ************************************ 00:06:18.091 END TEST thread_poller_perf 00:06:18.091 ************************************ 00:06:18.091 00:17:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:18.091 00:17:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.091 00:06:18.091 real 0m2.652s 00:06:18.091 user 0m2.349s 00:06:18.091 sys 0m0.309s 00:06:18.091 00:17:31 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.091 00:17:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.091 ************************************ 00:06:18.091 END TEST thread 00:06:18.091 ************************************ 00:06:18.353 00:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.353 00:17:31 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:18.353 00:17:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.353 00:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.353 00:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:18.353 ************************************ 00:06:18.353 START TEST accel 00:06:18.353 ************************************ 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:18.353 * Looking for test storage... 00:06:18.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:18.353 00:17:31 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:18.353 00:17:31 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:18.353 00:17:31 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.353 00:17:31 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=874194 00:06:18.353 00:17:31 accel -- accel/accel.sh@63 -- # waitforlisten 874194 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@829 -- # '[' -z 874194 ']' 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.353 00:17:31 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:18.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
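The accel suite that starts here launches spdk_tgt with -c /dev/fd/63: the JSON accel configuration is assembled in memory by build_accel_config (the accel_json_cfg array stays empty in this run, since none of the -gt 0 branches are taken) and handed to the target through bash process substitution rather than a file on disk. A sketch of that pattern, with an illustrative config body (the real JSON is whatever build_accel_config emits):

  # Why the command line shows "-c /dev/fd/NN": process substitution for an in-memory config
  config='{"subsystems":[{"subsystem":"accel","config":[]}]}'   # body is illustrative only
  ./build/bin/spdk_tgt -c <(echo "$config") &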
00:06:18.353 00:17:31 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.353 00:17:31 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:18.353 00:17:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.353 00:17:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.353 00:17:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.353 00:17:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.353 00:17:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.353 00:17:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.353 00:17:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:18.353 00:17:31 accel -- accel/accel.sh@41 -- # jq -r . 00:06:18.353 [2024-07-16 00:17:31.935100] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:18.353 [2024-07-16 00:17:31.935152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874194 ] 00:06:18.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.615 [2024-07-16 00:17:32.002490] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.615 [2024-07-16 00:17:32.070661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.186 00:17:32 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.186 00:17:32 accel -- common/autotest_common.sh@862 -- # return 0 00:06:19.186 00:17:32 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:19.186 00:17:32 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:19.186 00:17:32 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:19.186 00:17:32 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:19.186 00:17:32 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:19.186 00:17:32 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:19.186 00:17:32 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.186 00:17:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 00:17:32 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:19.186 00:17:32 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.187 00:17:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.187 00:17:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.187 00:17:32 accel -- accel/accel.sh@75 -- # killprocess 874194 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@948 -- # '[' -z 874194 ']' 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@952 -- # kill -0 874194 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@953 -- # uname 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 874194 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 874194' 00:06:19.187 killing process with pid 874194 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@967 -- # kill 874194 00:06:19.187 00:17:32 accel -- common/autotest_common.sh@972 -- # wait 874194 00:06:19.448 00:17:33 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:19.448 00:17:33 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:19.448 00:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:19.448 00:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.448 00:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.448 00:17:33 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:19.448 00:17:33 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:19.448 00:17:33 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.448 00:17:33 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:19.734 00:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.734 00:17:33 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:19.734 00:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.734 00:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.734 00:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.734 ************************************ 00:06:19.734 START TEST accel_missing_filename 00:06:19.734 ************************************ 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.734 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:19.734 00:17:33 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:19.734 [2024-07-16 00:17:33.170258] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:19.734 [2024-07-16 00:17:33.170328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874563 ] 00:06:19.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.734 [2024-07-16 00:17:33.241839] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.734 [2024-07-16 00:17:33.315436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.734 [2024-07-16 00:17:33.347751] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.994 [2024-07-16 00:17:33.384637] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:19.994 A filename is required. 
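The accel_missing_filename case above is a negative test: accel_perf is run with -w compress but without -l, and the "A filename is required." failure is the pass condition, captured through the NOT wrapper from autotest_common.sh, which inverts the exit status of the command it runs. Roughly, as a sketch rather than the exact helper source, and with a valid compress invocation per the -l option described in accel_perf's help (paths assume an SPDK checkout):

  # Roughly what the NOT helper does (sketch; the real helper in autotest_common.sh is more involved)
  NOT() { "$@" && return 1 || return 0; }

  # A compress run that should succeed needs an uncompressed input file via -l, e.g.:
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib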
00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.994 00:06:19.994 real 0m0.297s 00:06:19.994 user 0m0.224s 00:06:19.994 sys 0m0.114s 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.994 00:17:33 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:19.994 ************************************ 00:06:19.994 END TEST accel_missing_filename 00:06:19.994 ************************************ 00:06:19.994 00:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.994 00:17:33 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.994 00:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.994 00:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.994 00:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.994 ************************************ 00:06:19.994 START TEST accel_compress_verify 00:06:19.994 ************************************ 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.994 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.994 00:17:33 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:19.994 00:17:33 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:19.994 [2024-07-16 00:17:33.536801] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:19.994 [2024-07-16 00:17:33.536894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874584 ] 00:06:19.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.994 [2024-07-16 00:17:33.604655] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.254 [2024-07-16 00:17:33.668433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.254 [2024-07-16 00:17:33.700069] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.254 [2024-07-16 00:17:33.737111] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:20.254 00:06:20.254 Compression does not support the verify option, aborting. 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.254 00:06:20.254 real 0m0.284s 00:06:20.254 user 0m0.201s 00:06:20.254 sys 0m0.106s 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.254 00:17:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:20.254 ************************************ 00:06:20.254 END TEST accel_compress_verify 00:06:20.254 ************************************ 00:06:20.254 00:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.254 00:17:33 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:20.254 00:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.254 00:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.254 00:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.254 ************************************ 00:06:20.254 START TEST accel_wrong_workload 00:06:20.254 ************************************ 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:20.254 00:17:33 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.254 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:20.254 00:17:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:20.515 Unsupported workload type: foobar 00:06:20.515 [2024-07-16 00:17:33.892321] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:20.515 accel_perf options: 00:06:20.515 [-h help message] 00:06:20.515 [-q queue depth per core] 00:06:20.515 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.515 [-T number of threads per core 00:06:20.515 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.515 [-t time in seconds] 00:06:20.515 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.515 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:20.515 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.515 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.515 [-S for crc32c workload, use this seed value (default 0) 00:06:20.515 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.515 [-f for fill workload, use this BYTE value (default 255) 00:06:20.515 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.515 [-y verify result if this switch is on] 00:06:20.515 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.515 Can be used to spread operations across a wider range of memory. 
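The option summary dumped above (printed because -w foobar is not a recognized workload) doubles as the reference for the positive runs later in this log; for example, the crc32c cases use -S for the seed and -y to verify the result, and xor requires at least two source buffers via -x. Valid invocations built only from flags shown in that help text (illustrative, paths assume an SPDK build):

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # crc32c with seed 32, verify results
  ./build/examples/accel_perf -t 1 -w xor -y -x 2        # xor needs at least two source buffers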
00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.515 00:06:20.515 real 0m0.037s 00:06:20.515 user 0m0.025s 00:06:20.515 sys 0m0.012s 00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.515 00:17:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:20.515 ************************************ 00:06:20.515 END TEST accel_wrong_workload 00:06:20.515 ************************************ 00:06:20.515 Error: writing output failed: Broken pipe 00:06:20.515 00:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.515 00:17:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.515 00:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:20.515 00:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.515 00:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.515 ************************************ 00:06:20.515 START TEST accel_negative_buffers 00:06:20.515 ************************************ 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.515 00:17:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:20.516 00:17:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:20.516 -x option must be non-negative. 
00:06:20.516 [2024-07-16 00:17:34.003825] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:20.516 accel_perf options: 00:06:20.516 [-h help message] 00:06:20.516 [-q queue depth per core] 00:06:20.516 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.516 [-T number of threads per core 00:06:20.516 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.516 [-t time in seconds] 00:06:20.516 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.516 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:20.516 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.516 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.516 [-S for crc32c workload, use this seed value (default 0) 00:06:20.516 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.516 [-f for fill workload, use this BYTE value (default 255) 00:06:20.516 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.516 [-y verify result if this switch is on] 00:06:20.516 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.516 Can be used to spread operations across a wider range of memory. 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.516 00:06:20.516 real 0m0.036s 00:06:20.516 user 0m0.022s 00:06:20.516 sys 0m0.014s 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.516 00:17:34 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:20.516 ************************************ 00:06:20.516 END TEST accel_negative_buffers 00:06:20.516 ************************************ 00:06:20.516 Error: writing output failed: Broken pipe 00:06:20.516 00:17:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.516 00:17:34 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:20.516 00:17:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.516 00:17:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.516 00:17:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.516 ************************************ 00:06:20.516 START TEST accel_crc32c 00:06:20.516 ************************************ 00:06:20.516 00:17:34 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:20.516 00:17:34 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:20.516 [2024-07-16 00:17:34.108588] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:20.516 [2024-07-16 00:17:34.108713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874653 ] 00:06:20.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.776 [2024-07-16 00:17:34.186970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.776 [2024-07-16 00:17:34.258077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.776 00:17:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.843 00:17:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.843 00:06:21.843 real 0m1.308s 00:06:21.843 user 0m1.201s 00:06:21.843 sys 0m0.119s 00:06:21.843 00:17:35 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.843 00:17:35 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 END TEST accel_crc32c 00:06:21.843 ************************************ 00:06:21.843 00:17:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.843 00:17:35 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.843 00:17:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.843 00:17:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.843 00:17:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 START TEST accel_crc32c_C2 00:06:21.843 ************************************ 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.843 00:17:35 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.843 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:22.104 [2024-07-16 00:17:35.491578] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:22.104 [2024-07-16 00:17:35.491637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875006 ] 00:06:22.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.104 [2024-07-16 00:17:35.558695] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.104 [2024-07-16 00:17:35.623507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:22.104 00:17:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.487 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.488 00:06:23.488 real 0m1.291s 00:06:23.488 user 0m1.202s 00:06:23.488 sys 0m0.101s 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.488 00:17:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 ************************************ 00:06:23.488 END TEST accel_crc32c_C2 00:06:23.488 ************************************ 00:06:23.488 00:17:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.488 00:17:36 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:23.488 00:17:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.488 00:17:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.488 00:17:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 ************************************ 00:06:23.488 START TEST accel_copy 00:06:23.488 ************************************ 00:06:23.488 00:17:36 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:23.488 [2024-07-16 00:17:36.831017] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:23.488 [2024-07-16 00:17:36.831053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875361 ] 00:06:23.488 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.488 [2024-07-16 00:17:36.887184] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.488 [2024-07-16 00:17:36.950677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.488 00:17:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 
00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:24.900 00:17:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.900 00:06:24.900 real 0m1.260s 00:06:24.900 user 0m1.183s 00:06:24.900 sys 0m0.088s 00:06:24.900 00:17:38 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.900 00:17:38 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.900 ************************************ 00:06:24.900 END TEST accel_copy 00:06:24.900 ************************************ 00:06:24.900 00:17:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.900 00:17:38 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.900 00:17:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:24.900 00:17:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.900 00:17:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.900 ************************************ 00:06:24.900 START TEST accel_fill 00:06:24.900 ************************************ 00:06:24.900 00:17:38 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:24.900 [2024-07-16 00:17:38.181185] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:24.900 [2024-07-16 00:17:38.181256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875706 ] 00:06:24.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.900 [2024-07-16 00:17:38.249706] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.900 [2024-07-16 00:17:38.315573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.900 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.901 00:17:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:25.844 00:17:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.844 00:06:25.844 real 0m1.292s 00:06:25.844 user 0m1.196s 00:06:25.844 sys 0m0.109s 00:06:25.844 00:17:39 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.844 00:17:39 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:25.844 ************************************ 00:06:25.844 END TEST accel_fill 00:06:25.844 ************************************ 00:06:26.106 00:17:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.106 00:17:39 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:26.106 00:17:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.106 00:17:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.106 00:17:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.106 ************************************ 00:06:26.106 START TEST accel_copy_crc32c 00:06:26.106 ************************************ 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:26.106 [2024-07-16 00:17:39.545709] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:26.106 [2024-07-16 00:17:39.545781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875895 ] 00:06:26.106 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.106 [2024-07-16 00:17:39.616411] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.106 [2024-07-16 00:17:39.685809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.106 
00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.106 00:17:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.493 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.494 00:17:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.494 00:06:27.494 real 0m1.297s 00:06:27.494 user 0m1.195s 00:06:27.494 sys 0m0.116s 00:06:27.494 00:17:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.494 00:17:40 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 ************************************ 00:06:27.494 END TEST accel_copy_crc32c 00:06:27.494 ************************************ 00:06:27.494 00:17:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.494 00:17:40 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.494 00:17:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.494 00:17:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.494 00:17:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.494 ************************************ 00:06:27.494 START TEST accel_copy_crc32c_C2 00:06:27.494 ************************************ 00:06:27.494 00:17:40 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.494 00:17:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.494 [2024-07-16 00:17:40.920261] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:27.494 [2024-07-16 00:17:40.920351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876109 ] 00:06:27.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.494 [2024-07-16 00:17:40.992191] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.494 [2024-07-16 00:17:41.062554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.494 00:17:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.880 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.881 00:06:28.881 real 0m1.301s 00:06:28.881 user 0m1.202s 00:06:28.881 sys 0m0.112s 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.881 00:17:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 ************************************ 00:06:28.881 END TEST accel_copy_crc32c_C2 00:06:28.881 ************************************ 00:06:28.881 00:17:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.881 00:17:42 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:28.881 00:17:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.881 00:17:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.881 00:17:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 ************************************ 00:06:28.881 START TEST accel_dualcast 00:06:28.881 ************************************ 00:06:28.881 00:17:42 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:28.881 [2024-07-16 00:17:42.297246] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:28.881 [2024-07-16 00:17:42.297341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876447 ] 00:06:28.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.881 [2024-07-16 00:17:42.367360] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.881 [2024-07-16 00:17:42.439065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.881 00:17:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:30.284 00:17:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.284 00:06:30.284 real 0m1.301s 00:06:30.284 user 0m1.199s 00:06:30.284 sys 0m0.113s 00:06:30.284 00:17:43 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.284 00:17:43 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:30.284 ************************************ 00:06:30.284 END TEST accel_dualcast 00:06:30.284 ************************************ 00:06:30.284 00:17:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.284 00:17:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:30.284 00:17:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.284 00:17:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.284 00:17:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.284 ************************************ 00:06:30.284 START TEST accel_compare 00:06:30.284 ************************************ 00:06:30.284 00:17:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:30.284 [2024-07-16 00:17:43.671413] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
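Each block ends with a real/user/sys summary of roughly 1.3 s: the workload itself runs for the requested 1 second (-t 1) and the remainder is application start-up and teardown. The same figure can be checked by hand for the compare run that is starting here (build path assumed, as in the earlier sketch).

  time ./build/examples/accel_perf -t 1 -w compare -y   # expect "real" slightly above 1s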
00:06:30.284 [2024-07-16 00:17:43.671476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876796 ] 00:06:30.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.284 [2024-07-16 00:17:43.741014] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.284 [2024-07-16 00:17:43.810624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.284 00:17:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 
00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:31.669 00:17:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.669 00:06:31.669 real 0m1.297s 00:06:31.669 user 0m1.201s 00:06:31.669 sys 0m0.107s 00:06:31.669 00:17:44 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.669 00:17:44 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:31.669 ************************************ 00:06:31.669 END TEST accel_compare 00:06:31.669 ************************************ 00:06:31.669 00:17:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.669 00:17:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:31.669 00:17:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:31.669 00:17:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.669 00:17:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.669 ************************************ 00:06:31.669 START TEST accel_xor 00:06:31.669 ************************************ 00:06:31.669 00:17:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:31.669 [2024-07-16 00:17:45.044012] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
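The START TEST / END TEST banners and the real/user/sys lines around each workload come from the run_test helper that wraps every accel_test invocation. A rough sketch of what that wrapper does, reconstructed from the output in this log rather than from autotest_common.sh itself:

  run_test() {                       # illustrative reconstruction, not the real helper
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys summary seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }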
00:06:31.669 [2024-07-16 00:17:45.044108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877148 ] 00:06:31.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.669 [2024-07-16 00:17:45.114923] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.669 [2024-07-16 00:17:45.185497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.669 00:17:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.670 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.670 00:17:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.055 00:06:33.055 real 0m1.301s 00:06:33.055 user 0m1.198s 00:06:33.055 sys 0m0.114s 00:06:33.055 00:17:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.055 00:17:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:33.055 ************************************ 00:06:33.055 END TEST accel_xor 00:06:33.055 ************************************ 00:06:33.055 00:17:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.055 00:17:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:33.055 00:17:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:33.055 00:17:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.055 00:17:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.055 ************************************ 00:06:33.055 START TEST accel_xor 00:06:33.055 ************************************ 00:06:33.055 00:17:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:33.055 [2024-07-16 00:17:46.417903] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
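This second xor run differs from the one above only in the extra '-x 3' argument; the trace that follows records val=3 where the first run recorded val=2, so the flag evidently raises the number of XOR source buffers from the default two to three (that reading is inferred from the trace, not from accel_perf's help text). Side by side, with the build path assumed as before:

  ./build/examples/accel_perf -t 1 -w xor -y        # first run: default sources (val=2 in the trace)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # this run: three sources (val=3 in the trace)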
00:06:33.055 [2024-07-16 00:17:46.417983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877369 ] 00:06:33.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.055 [2024-07-16 00:17:46.487956] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.055 [2024-07-16 00:17:46.558118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.055 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.056 00:17:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:34.441 00:17:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.441 00:06:34.441 real 0m1.297s 00:06:34.441 user 0m1.197s 00:06:34.441 sys 0m0.111s 00:06:34.441 00:17:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.441 00:17:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 ************************************ 00:06:34.442 END TEST accel_xor 00:06:34.442 ************************************ 00:06:34.442 00:17:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.442 00:17:47 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:34.442 00:17:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:34.442 00:17:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.442 00:17:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 ************************************ 00:06:34.442 START TEST accel_dif_verify 00:06:34.442 ************************************ 00:06:34.442 00:17:47 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:34.442 [2024-07-16 00:17:47.792333] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
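The dif_verify trace below shows two 4096-byte buffers plus 512-byte and 8-byte values, which is consistent with 4 KiB test buffers split into 512-byte blocks that each carry an 8-byte T10 DIF protection tuple; that interpretation is inferred from the trace, not stated by the log. The stand-alone equivalent of this run (note the wrapper passes no -y here) would be:

  ./build/examples/accel_perf -t 1 -w dif_verify    # path assumed; the CI run additionally feeds an
                                                    # (empty) JSON accel config via -c /dev/fd/62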
00:06:34.442 [2024-07-16 00:17:47.792429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877557 ] 00:06:34.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.442 [2024-07-16 00:17:47.862438] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.442 [2024-07-16 00:17:47.931594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.442 00:17:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.828 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:35.829 00:17:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.829 00:06:35.829 real 0m1.299s 00:06:35.829 user 0m1.203s 00:06:35.829 sys 0m0.109s 00:06:35.829 00:17:49 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.829 00:17:49 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 ************************************ 00:06:35.829 END TEST accel_dif_verify 00:06:35.829 ************************************ 00:06:35.829 00:17:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.829 00:17:49 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:35.829 00:17:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:35.829 00:17:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.829 00:17:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 ************************************ 00:06:35.829 START TEST accel_dif_generate 00:06:35.829 ************************************ 00:06:35.829 00:17:49 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 
00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:35.829 [2024-07-16 00:17:49.167199] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:35.829 [2024-07-16 00:17:49.167274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877886 ] 00:06:35.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.829 [2024-07-16 00:17:49.235522] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.829 [2024-07-16 00:17:49.299431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:35.829 00:17:49 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.829 00:17:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.216 00:17:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:37.216 00:17:50 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.216 00:06:37.216 real 0m1.291s 00:06:37.216 user 0m1.199s 00:06:37.216 sys 0m0.103s 00:06:37.216 00:17:50 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.216 00:17:50 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:37.216 ************************************ 00:06:37.216 END TEST accel_dif_generate 00:06:37.216 ************************************ 00:06:37.216 00:17:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.216 00:17:50 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:37.216 00:17:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.216 00:17:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.216 00:17:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.216 ************************************ 00:06:37.216 START TEST accel_dif_generate_copy 00:06:37.216 ************************************ 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.216 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:37.217 [2024-07-16 00:17:50.532964] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:37.217 [2024-07-16 00:17:50.533051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878240 ] 00:06:37.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.217 [2024-07-16 00:17:50.599678] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.217 [2024-07-16 00:17:50.667348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
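Note on the two DIF runs traced above: the harness simply drives SPDK's accel_perf example, and the full command line is visible in the accel/accel.sh@12 entries. A minimal way to repeat such a run by hand is sketched below; it assumes a local SPDK checkout built with examples (paths relative to the repository root rather than the Jenkins workspace) and drops the JSON config that build_accel_config feeds in over /dev/fd/62, since with no hardware module configured the software accel module ends up selected anyway (accel_module=software in the trace).
# software DIF generation for 1 second; the 4096-, 512- and 8-byte values dumped in the trace come from the harness
build/examples/accel_perf -t 1 -w dif_generate
# same opcode in its generate-and-copy form
build/examples/accel_perf -t 1 -w dif_generate_copy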
00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.217 00:17:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.598 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.599 00:06:38.599 real 0m1.293s 00:06:38.599 user 0m1.198s 00:06:38.599 sys 0m0.107s 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.599 00:17:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:38.599 ************************************ 00:06:38.599 END TEST accel_dif_generate_copy 00:06:38.599 ************************************ 00:06:38.599 00:17:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.599 00:17:51 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:38.599 00:17:51 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.599 00:17:51 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:38.599 00:17:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.599 00:17:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.599 ************************************ 00:06:38.599 START TEST accel_comp 00:06:38.599 ************************************ 00:06:38.599 00:17:51 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.599 00:17:51 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:38.599 00:17:51 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:38.599 [2024-07-16 00:17:51.899403] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:38.599 [2024-07-16 00:17:51.899471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878589 ] 00:06:38.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.599 [2024-07-16 00:17:51.969281] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.599 [2024-07-16 00:17:52.038688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.599 00:17:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.539 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:39.801 00:17:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.801 00:06:39.801 real 0m1.300s 00:06:39.801 user 0m1.205s 00:06:39.801 sys 0m0.107s 00:06:39.801 00:17:53 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.801 00:17:53 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 ************************************ 00:06:39.801 END TEST accel_comp 00:06:39.801 ************************************ 00:06:39.801 00:17:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.801 00:17:53 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.801 00:17:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:39.801 00:17:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.801 00:17:53 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.801 ************************************ 00:06:39.801 START TEST accel_decomp 00:06:39.801 ************************************ 00:06:39.802 00:17:53 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:39.802 00:17:53 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:39.802 [2024-07-16 00:17:53.275326] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
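The compress/decompress pair (accel_comp above, accel_decomp starting here) adds two flags to the same accel_perf invocation: -l points the workload at the test input file test/accel/bib, and -y is passed for the decompress direction so accel_perf checks the result. A hand-run sketch under the same assumptions as the earlier one:
# compress the bundled test file for 1 second
build/examples/accel_perf -t 1 -w compress -l test/accel/bib
# decompress it again, with verification
build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y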
00:06:39.802 [2024-07-16 00:17:53.275398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878856 ] 00:06:39.802 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.802 [2024-07-16 00:17:53.345430] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.802 [2024-07-16 00:17:53.416328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.079 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.079 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.079 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.080 00:17:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.021 00:17:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.021 00:06:41.021 real 0m1.302s 00:06:41.021 user 0m1.203s 00:06:41.021 sys 0m0.112s 00:06:41.021 00:17:54 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.021 00:17:54 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:41.021 ************************************ 00:06:41.021 END TEST accel_decomp 00:06:41.021 ************************************ 00:06:41.021 00:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.021 00:17:54 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.021 00:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:41.021 00:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.021 00:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.021 ************************************ 00:06:41.021 START TEST accel_decomp_full 00:06:41.021 ************************************ 00:06:41.021 00:17:54 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.021 00:17:54 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:41.021 00:17:54 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:41.281 [2024-07-16 00:17:54.655912] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:41.281 [2024-07-16 00:17:54.656018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879040 ] 00:06:41.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.281 [2024-07-16 00:17:54.727661] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.281 [2024-07-16 00:17:54.795597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.281 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:41.282 00:17:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.694 00:17:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.694 00:06:42.694 real 0m1.314s 00:06:42.694 user 0m1.210s 00:06:42.694 sys 0m0.117s 00:06:42.694 00:17:55 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.694 00:17:55 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:42.694 ************************************ 00:06:42.694 END TEST accel_decomp_full 00:06:42.694 ************************************ 00:06:42.694 00:17:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.694 00:17:55 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.694 00:17:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:42.694 00:17:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.694 00:17:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.694 ************************************ 00:06:42.694 START TEST accel_decomp_mcore 00:06:42.694 ************************************ 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.694 [2024-07-16 00:17:56.038491] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
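accel_decomp_mcore, whose setup begins in the entry above, only changes the scheduling side: -m 0xf hands the app a four-core mask, which is why the startup notices that follow report 'Total cores available: 4' and four reactors, and why its 'user' time ends up well above wall-clock time. Sketch:
# multi-core decompress across cores 0-3
build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf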
00:06:42.694 [2024-07-16 00:17:56.038558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879330 ] 00:06:42.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.694 [2024-07-16 00:17:56.108722] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.694 [2024-07-16 00:17:56.183694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.694 [2024-07-16 00:17:56.183810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.694 [2024-07-16 00:17:56.183967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.694 [2024-07-16 00:17:56.183967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.694 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:42.695 00:17:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.080 00:06:44.080 real 0m1.313s 00:06:44.080 user 0m4.446s 00:06:44.080 sys 0m0.115s 00:06:44.080 00:17:57 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.080 00:17:57 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:44.080 ************************************ 00:06:44.080 END TEST accel_decomp_mcore 00:06:44.080 ************************************ 00:06:44.080 00:17:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.080 00:17:57 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.080 00:17:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:44.080 00:17:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.080 00:17:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.080 ************************************ 00:06:44.080 START TEST accel_decomp_full_mcore 00:06:44.080 ************************************ 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:44.080 [2024-07-16 00:17:57.427585] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
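The run_test call above starts accel_decomp_full_mcore, which drives the same accel_perf example binary as the previous test but adds -o 0 and spreads the work over core mask 0xf. A minimal sketch of reproducing that invocation by hand, assuming a built SPDK tree and assuming the JSON accel config the harness pipes in via -c /dev/fd/62 can be dropped for a plain software-module run; the reading of -y as "verify output" and of -t 1 as a one-second run is mine, inferred from the '1 seconds' value in the trace:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Software decompress of test/accel/bib on 4 cores (-m 0xf) for 1 second,
# mirroring the logged command line minus the -c /dev/fd/62 config descriptor.
"$SPDK_DIR"/build/examples/accel_perf \
    -t 1 -w decompress \
    -l "$SPDK_DIR"/test/accel/bib \
    -y -o 0 -m 0xf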
00:06:44.080 [2024-07-16 00:17:57.427675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879686 ] 00:06:44.080 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.080 [2024-07-16 00:17:57.498840] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.080 [2024-07-16 00:17:57.569834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.080 [2024-07-16 00:17:57.569950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.080 [2024-07-16 00:17:57.570108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.080 [2024-07-16 00:17:57.570108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.080 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.081 00:17:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.467 00:06:45.467 real 0m1.326s 00:06:45.467 user 0m4.502s 00:06:45.467 sys 0m0.116s 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.467 00:17:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:45.467 ************************************ 00:06:45.467 END TEST accel_decomp_full_mcore 00:06:45.467 ************************************ 00:06:45.467 00:17:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.467 00:17:58 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.467 00:17:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:45.467 00:17:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.467 00:17:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.467 ************************************ 00:06:45.467 START TEST accel_decomp_mthread 00:06:45.467 ************************************ 00:06:45.467 00:17:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.467 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:45.467 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:45.467 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:45.468 00:17:58 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:45.468 [2024-07-16 00:17:58.827019] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
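accel_decomp_mthread repeats the decompress workload on a single core (the EAL mask below is 0x1) but passes -T 2 to accel_perf; my reading is that -T sets the number of worker threads per core, so concurrency comes from threads rather than extra reactors. A sketch of the equivalent direct invocation, under the same assumption as above that the -c config descriptor can be omitted:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# One reactor core, two accel_perf worker threads, 1-second software decompress run.
"$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
    -l "$SPDK_DIR"/test/accel/bib -y -T 2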
00:06:45.468 [2024-07-16 00:17:58.827111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880038 ] 00:06:45.468 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.468 [2024-07-16 00:17:58.905327] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.468 [2024-07-16 00:17:58.976307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.468 00:17:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.852 00:18:00 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.852 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.853 00:06:46.853 real 0m1.316s 00:06:46.853 user 0m1.210s 00:06:46.853 sys 0m0.119s 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.853 00:18:00 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:46.853 ************************************ 00:06:46.853 END TEST accel_decomp_mthread 00:06:46.853 ************************************ 00:06:46.853 00:18:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.853 00:18:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.853 00:18:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.853 00:18:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.853 00:18:00 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.853 ************************************ 00:06:46.853 START TEST accel_decomp_full_mthread 00:06:46.853 ************************************ 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:46.853 [2024-07-16 00:18:00.213735] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
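The wall of IFS=: / read -r var val / case "$var" lines that fills each of these tests is xtrace output from the result-parsing loop in test/accel/accel.sh (lines 19-27 of that script in this tree): the harness reads accel_perf's settings dump as colon-separated key/value pairs, records which module and opcode actually ran, and checks them at line 27. A simplified reconstruction of that loop, assuming accel_perf prints fields along the lines of module:software and opc:decompress (the exact field names are not visible in the trace):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
accel_module="" accel_opc=""
# Parse the colon-separated settings dump into accel_module / accel_opc.
while IFS=: read -r var val; do
    case "$var" in
        *module*) accel_module=${val//[[:space:]]/} ;;
        *opc*)    accel_opc=${val//[[:space:]]/} ;;
    esac
done < <("$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
             -l "$SPDK_DIR"/test/accel/bib -y -o 0 -T 2)
# Mirror of the accel.sh@27 checks: a software module must have run decompress.
[[ -n $accel_module && -n $accel_opc && $accel_module == software ]]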
00:06:46.853 [2024-07-16 00:18:00.213800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880399 ] 00:06:46.853 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.853 [2024-07-16 00:18:00.283155] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.853 [2024-07-16 00:18:00.351449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.853 00:18:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.263 00:06:48.263 real 0m1.331s 00:06:48.263 user 0m1.240s 00:06:48.263 sys 0m0.103s 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.263 00:18:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:48.263 ************************************ 00:06:48.263 END TEST accel_decomp_full_mthread 
00:06:48.263 ************************************ 00:06:48.263 00:18:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.263 00:18:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:48.263 00:18:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.263 00:18:01 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:48.263 00:18:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:48.263 00:18:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.263 00:18:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.263 00:18:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.263 00:18:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.263 00:18:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.263 00:18:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.264 00:18:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.264 00:18:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:48.264 00:18:01 accel -- accel/accel.sh@41 -- # jq -r . 00:06:48.264 ************************************ 00:06:48.264 START TEST accel_dif_functional_tests 00:06:48.264 ************************************ 00:06:48.264 00:18:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.264 [2024-07-16 00:18:01.643431] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:48.264 [2024-07-16 00:18:01.643488] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880687 ] 00:06:48.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.264 [2024-07-16 00:18:01.713741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.264 [2024-07-16 00:18:01.791262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.264 [2024-07-16 00:18:01.791355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.264 [2024-07-16 00:18:01.791519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.264 00:06:48.264 00:06:48.264 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.264 http://cunit.sourceforge.net/ 00:06:48.264 00:06:48.264 00:06:48.264 Suite: accel_dif 00:06:48.264 Test: verify: DIF generated, GUARD check ...passed 00:06:48.264 Test: verify: DIF generated, APPTAG check ...passed 00:06:48.264 Test: verify: DIF generated, REFTAG check ...passed 00:06:48.264 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:18:01.847221] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.264 passed 00:06:48.264 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:18:01.847440] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.264 passed 00:06:48.264 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:18:01.847463] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.264 passed 00:06:48.264 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:48.264 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 00:18:01.847514] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:48.264 passed 00:06:48.264 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:48.264 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:48.264 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:48.264 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:18:01.847628] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:48.264 passed 00:06:48.264 Test: verify copy: DIF generated, GUARD check ...passed 00:06:48.264 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:48.264 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:48.264 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:18:01.847750] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.264 passed 00:06:48.264 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:18:01.847773] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.264 passed 00:06:48.264 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:18:01.847793] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.264 passed 00:06:48.264 Test: generate copy: DIF generated, GUARD check ...passed 00:06:48.264 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:48.264 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:48.264 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:48.264 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:48.264 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:48.264 Test: generate copy: iovecs-len validate ...[2024-07-16 00:18:01.847979] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:48.264 passed 00:06:48.264 Test: generate copy: buffer alignment validate ...passed 00:06:48.264 00:06:48.264 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.264 suites 1 1 n/a 0 0 00:06:48.264 tests 26 26 26 0 0 00:06:48.264 asserts 115 115 115 0 n/a 00:06:48.264 00:06:48.264 Elapsed time = 0.002 seconds 00:06:48.525 00:06:48.525 real 0m0.372s 00:06:48.525 user 0m0.496s 00:06:48.525 sys 0m0.140s 00:06:48.525 00:18:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.525 00:18:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:48.525 ************************************ 00:06:48.525 END TEST accel_dif_functional_tests 00:06:48.525 ************************************ 00:06:48.525 00:18:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.525 00:06:48.525 real 0m30.222s 00:06:48.525 user 0m33.729s 00:06:48.525 sys 0m4.205s 00:06:48.525 00:18:02 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.525 00:18:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.525 ************************************ 00:06:48.525 END TEST accel 00:06:48.525 ************************************ 00:06:48.525 00:18:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.526 00:18:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.526 00:18:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.526 00:18:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.526 00:18:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.526 ************************************ 00:06:48.526 START TEST accel_rpc 00:06:48.526 ************************************ 00:06:48.526 00:18:02 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.786 * Looking for test storage... 00:06:48.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.786 00:18:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.786 00:18:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=880914 00:06:48.786 00:18:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 880914 00:06:48.786 00:18:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 880914 ']' 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.786 00:18:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.786 [2024-07-16 00:18:02.236246] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
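The accel_rpc suite starting here launches a bare spdk_tgt with --wait-for-rpc, which holds the target before subsystem initialization so opcode assignments can still be changed over JSON-RPC, and then waits for the RPC socket with waitforlisten. A sketch of that start-up pattern, with waitforlisten approximated by polling rpc_get_methods on the default /var/tmp/spdk.sock socket (the polling loop is my stand-in, not the helper's actual implementation):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' ERR
# Stand-in for waitforlisten "$spdk_tgt_pid": wait until the RPC server answers.
until "$SPDK_DIR"/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done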
00:06:48.786 [2024-07-16 00:18:02.236317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880914 ] 00:06:48.786 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.786 [2024-07-16 00:18:02.310008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.786 [2024-07-16 00:18:02.384053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 ************************************ 00:06:49.728 START TEST accel_assign_opcode 00:06:49.728 ************************************ 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 [2024-07-16 00:18:03.062031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 [2024-07-16 00:18:03.074057] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
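Around this point the accel_assign_opcode test assigns the copy opcode to a deliberately bogus module name, re-assigns it to software, and only then calls framework_start_init; the readback with accel_get_opc_assignments further down confirms the last assignment won. A condensed sketch of that RPC sequence (rpc_cmd in the harness is, in effect, a wrapper around scripts/rpc.py):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m incorrect   # accepted while the target is uninitialized
$RPC accel_assign_opc -o copy -m software    # last assignment wins
$RPC framework_start_init                    # bring the subsystems up
$RPC accel_get_opc_assignments | jq -r .copy | grep -q software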
00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.728 software 00:06:49.728 00:06:49.728 real 0m0.216s 00:06:49.728 user 0m0.047s 00:06:49.728 sys 0m0.011s 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.728 00:18:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 ************************************ 00:06:49.728 END TEST accel_assign_opcode 00:06:49.728 ************************************ 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.728 00:18:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 880914 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 880914 ']' 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 880914 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.728 00:18:03 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880914 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880914' 00:06:49.992 killing process with pid 880914 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@967 -- # kill 880914 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@972 -- # wait 880914 00:06:49.992 00:06:49.992 real 0m1.500s 00:06:49.992 user 0m1.594s 00:06:49.992 sys 0m0.414s 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.992 00:18:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.992 ************************************ 00:06:49.992 END TEST accel_rpc 00:06:49.992 ************************************ 00:06:49.992 00:18:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.992 00:18:03 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:49.992 00:18:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.992 00:18:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.992 00:18:03 -- common/autotest_common.sh@10 -- # set +x 00:06:50.251 ************************************ 00:06:50.251 START TEST app_cmdline 00:06:50.251 ************************************ 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:50.251 * Looking for test storage... 
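The app_cmdline suite that begins here starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, and the trace that follows checks both sides of that allow-list: the two permitted methods work, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). A compact sketch of those checks against a target started the same way:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC spdk_get_version | jq -r .version        # "SPDK v24.09-pre git sha1 fcbf7f00f"
# Exactly the two allowed methods should be reported.
[[ $($RPC rpc_get_methods | jq -r '.[]' | sort | xargs) == "rpc_get_methods spdk_get_version" ]]
# Any other method must fail with -32601 "Method not found".
! $RPC env_dpdk_get_mem_stats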
00:06:50.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.251 00:18:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.251 00:18:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.251 00:18:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=881325 00:06:50.251 00:18:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 881325 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 881325 ']' 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.251 00:18:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.251 [2024-07-16 00:18:03.790291] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:50.252 [2024-07-16 00:18:03.790340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881325 ] 00:06:50.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.252 [2024-07-16 00:18:03.853824] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.511 [2024-07-16 00:18:03.918486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.082 00:18:04 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.082 00:18:04 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:51.082 { 00:06:51.082 "version": "SPDK v24.09-pre git sha1 fcbf7f00f", 00:06:51.082 "fields": { 00:06:51.082 "major": 24, 00:06:51.082 "minor": 9, 00:06:51.082 "patch": 0, 00:06:51.082 "suffix": "-pre", 00:06:51.082 "commit": "fcbf7f00f" 00:06:51.082 } 00:06:51.082 } 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.082 00:18:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.082 00:18:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.082 00:18:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.082 00:18:04 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.343 00:18:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.343 00:18:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.343 00:18:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.343 request: 00:06:51.343 { 00:06:51.343 "method": "env_dpdk_get_mem_stats", 00:06:51.343 "req_id": 1 00:06:51.343 } 00:06:51.343 Got JSON-RPC error response 00:06:51.343 response: 00:06:51.343 { 00:06:51.343 "code": -32601, 00:06:51.343 "message": "Method not found" 00:06:51.343 } 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.343 00:18:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 881325 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 881325 ']' 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 881325 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881325 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881325' 00:06:51.343 killing process with pid 881325 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 881325 00:06:51.343 00:18:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 881325 00:06:51.603 00:06:51.603 real 0m1.487s 00:06:51.603 user 0m1.758s 00:06:51.603 sys 0m0.374s 00:06:51.603 00:18:05 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.603 
00:18:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 END TEST app_cmdline 00:06:51.603 ************************************ 00:06:51.603 00:18:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.603 00:18:05 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:51.603 00:18:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.603 00:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.603 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 START TEST version 00:06:51.603 ************************************ 00:06:51.603 00:18:05 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:51.865 * Looking for test storage... 00:06:51.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:51.865 00:18:05 version -- app/version.sh@17 -- # get_header_version major 00:06:51.865 00:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # cut -f2 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.865 00:18:05 version -- app/version.sh@17 -- # major=24 00:06:51.865 00:18:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:51.865 00:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # cut -f2 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.865 00:18:05 version -- app/version.sh@18 -- # minor=9 00:06:51.865 00:18:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:51.865 00:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # cut -f2 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.865 00:18:05 version -- app/version.sh@19 -- # patch=0 00:06:51.865 00:18:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:51.865 00:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:51.865 00:18:05 version -- app/version.sh@14 -- # cut -f2 00:06:51.865 00:18:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:51.865 00:18:05 version -- app/version.sh@22 -- # version=24.9 00:06:51.865 00:18:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.865 00:18:05 version -- app/version.sh@28 -- # version=24.9rc0 00:06:51.865 00:18:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:51.865 00:18:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
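The version test traced above derives the version string directly from include/spdk/version.h. A condensed sketch of that parsing follows, reusing the exact grep/cut/tr pipeline from the trace; $SPDK again stands for the checkout path, the rc0 handling mirrors what this run shows (the -pre suffix is reported as 24.9rc0), and the final Python check assumes PYTHONPATH includes $SPDK/python as the trace sets it.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    V=$SPDK/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$V" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$V" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$V" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$V" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ -n $suffix ]] && version=${version}rc0        # "-pre" shows up as rc0 in this run
    # Must match what the in-tree Python package reports:
    PYTHONPATH=$SPDK/python python3 -c 'import spdk; print(spdk.__version__)'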
00:06:51.865 00:18:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:51.865 00:18:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:51.865 00:06:51.865 real 0m0.174s 00:06:51.865 user 0m0.084s 00:06:51.865 sys 0m0.121s 00:06:51.865 00:18:05 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.865 00:18:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:51.865 ************************************ 00:06:51.865 END TEST version 00:06:51.865 ************************************ 00:06:51.865 00:18:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.865 00:18:05 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@198 -- # uname -s 00:06:51.865 00:18:05 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:51.865 00:18:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:51.865 00:18:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:51.865 00:18:05 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:51.865 00:18:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.865 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.865 00:18:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:51.865 00:18:05 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:51.865 00:18:05 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:51.865 00:18:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.865 00:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.865 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.127 ************************************ 00:06:52.127 START TEST nvmf_tcp 00:06:52.127 ************************************ 00:06:52.127 00:18:05 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.127 * Looking for test storage... 00:06:52.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:52.127 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:52.127 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:52.127 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.127 00:18:05 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:52.127 00:18:05 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.128 00:18:05 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.128 00:18:05 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.128 00:18:05 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.128 00:18:05 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.128 00:18:05 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.128 00:18:05 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.128 00:18:05 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:52.128 00:18:05 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:52.128 00:18:05 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.128 00:18:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:52.128 00:18:05 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:52.128 00:18:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:52.128 00:18:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.128 00:18:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.128 ************************************ 00:06:52.128 START TEST nvmf_example 00:06:52.128 ************************************ 00:06:52.128 00:18:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:52.128 * Looking for test storage... 
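For orientation, the nvmf/common.sh block traced above mostly establishes the test environment. The fragment below is a condensed sketch of the values set on this host, not the full script: the host NQN and UUID are generated fresh each run by nvme gen-hostnqn, and the derivation of NVME_HOSTID shown here is an approximation, since the trace only records the resulting values.

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # approximation: the trace shows the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # extra app args added by build_nvmf_app_args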
00:06:52.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.128 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.128 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.390 00:18:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:00.532 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:00.532 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:00.532 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:00.532 Found net devices under 0000:31:00.1: cvl_0_1 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:07:00.532 00:07:00.532 --- 10.0.0.2 ping statistics --- 00:07:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.532 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:07:00.532 00:07:00.532 --- 10.0.0.1 ping statistics --- 00:07:00.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.532 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=886534 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 886534 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 886534 ']' 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
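The nvmf_example run above drives a real E810 port pair (cvl_0_0/cvl_0_1) rather than virtual devices. A condensed sketch of the network setup and target launch as traced follows; it assumes the same interface names and addressing as this host, with $SPDK as the checkout shorthand, and on another machine the NIC names and driver will differ.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root namespace to the namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and back the other way
    modprobe nvme-tcp
    # Launch the example target inside the namespace, as the trace does:
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &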
00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.532 00:18:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.471 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.472 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.472 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.472 00:18:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.472 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:01.472 00:18:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:01.472 EAL: No free 2048 kB hugepages reported on node 1 
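Once the example target is listening on its RPC socket, the test provisions a malloc-backed subsystem and then drives it with spdk_nvme_perf; the flags below are copied from the trace, and the perf output that follows is the run recorded in this log. This is a sketch rather than the test script: it assumes rpc.py can reach the default /var/tmp/spdk.sock and that the malloc bdev comes back as Malloc0, as it does here.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                           # prints the bdev name, Malloc0 in this run
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'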
00:07:13.726 Initializing NVMe Controllers 00:07:13.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:13.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:13.726 Initialization complete. Launching workers. 00:07:13.726 ======================================================== 00:07:13.726 Latency(us) 00:07:13.726 Device Information : IOPS MiB/s Average min max 00:07:13.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18636.55 72.80 3433.71 741.17 15568.09 00:07:13.726 ======================================================== 00:07:13.726 Total : 18636.55 72.80 3433.71 741.17 15568.09 00:07:13.726 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.726 rmmod nvme_tcp 00:07:13.726 rmmod nvme_fabrics 00:07:13.726 rmmod nvme_keyring 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 886534 ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 886534 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 886534 ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 886534 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886534 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886534' 00:07:13.726 killing process with pid 886534 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 886534 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 886534 00:07:13.726 nvmf threads initialize successfully 00:07:13.726 bdev subsystem init successfully 00:07:13.726 created a nvmf target service 00:07:13.726 create targets's poll groups done 00:07:13.726 all subsystems of target started 00:07:13.726 nvmf target is running 00:07:13.726 all subsystems of target stopped 00:07:13.726 destroy targets's poll groups done 00:07:13.726 destroyed the nvmf target service 00:07:13.726 bdev subsystem finish successfully 00:07:13.726 nvmf threads destroy successfully 00:07:13.726 00:18:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.726 00:18:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.987 00:07:13.987 real 0m21.936s 00:07:13.987 user 0m47.062s 00:07:13.987 sys 0m7.048s 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.987 00:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.987 ************************************ 00:07:13.987 END TEST nvmf_example 00:07:13.987 ************************************ 00:07:14.251 00:18:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.251 00:18:27 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.251 00:18:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.251 00:18:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.251 00:18:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.251 ************************************ 00:07:14.251 START TEST nvmf_filesystem 00:07:14.251 ************************************ 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.251 * Looking for test storage... 
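The nvmf_example teardown traced above (nvmftestfini) reduces to stopping the target, unloading the host-side NVMe modules, and undoing the namespace plumbing. The rough sketch below follows the trace except where noted: remove_spdk_ns is not expanded in the log, so the explicit ip netns delete is an assumption about what it does on this host.

    kill "$nvmfpid"                          # nvmfpid (886534 here) was recorded when the target started
    modprobe -v -r nvme-tcp                  # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring going away
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk          # assumption: returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1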
00:07:14.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:14.251 00:18:27 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:14.251 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:14.252 #define SPDK_CONFIG_H 00:07:14.252 #define SPDK_CONFIG_APPS 1 00:07:14.252 #define SPDK_CONFIG_ARCH native 00:07:14.252 #undef SPDK_CONFIG_ASAN 00:07:14.252 #undef SPDK_CONFIG_AVAHI 00:07:14.252 #undef SPDK_CONFIG_CET 00:07:14.252 #define SPDK_CONFIG_COVERAGE 1 00:07:14.252 #define SPDK_CONFIG_CROSS_PREFIX 00:07:14.252 #undef SPDK_CONFIG_CRYPTO 00:07:14.252 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:14.252 #undef SPDK_CONFIG_CUSTOMOCF 00:07:14.252 #undef SPDK_CONFIG_DAOS 00:07:14.252 #define SPDK_CONFIG_DAOS_DIR 00:07:14.252 #define SPDK_CONFIG_DEBUG 1 00:07:14.252 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:14.252 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:14.252 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:14.252 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:14.252 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:14.252 #undef SPDK_CONFIG_DPDK_UADK 00:07:14.252 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.252 #define SPDK_CONFIG_EXAMPLES 1 00:07:14.252 #undef SPDK_CONFIG_FC 00:07:14.252 #define SPDK_CONFIG_FC_PATH 00:07:14.252 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:14.252 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:14.252 #undef SPDK_CONFIG_FUSE 00:07:14.252 #undef SPDK_CONFIG_FUZZER 00:07:14.252 #define SPDK_CONFIG_FUZZER_LIB 00:07:14.252 #undef SPDK_CONFIG_GOLANG 00:07:14.252 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:14.252 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:14.252 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:14.252 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:14.252 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:14.252 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:14.252 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:14.252 #define SPDK_CONFIG_IDXD 1 00:07:14.252 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:14.252 #undef SPDK_CONFIG_IPSEC_MB 00:07:14.252 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:14.252 #define SPDK_CONFIG_ISAL 1 00:07:14.252 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:14.252 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:14.252 #define SPDK_CONFIG_LIBDIR 00:07:14.252 #undef SPDK_CONFIG_LTO 00:07:14.252 #define SPDK_CONFIG_MAX_LCORES 128 00:07:14.252 #define SPDK_CONFIG_NVME_CUSE 1 00:07:14.252 #undef SPDK_CONFIG_OCF 00:07:14.252 #define SPDK_CONFIG_OCF_PATH 00:07:14.252 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:14.252 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:14.252 #define SPDK_CONFIG_PGO_DIR 00:07:14.252 #undef SPDK_CONFIG_PGO_USE 00:07:14.252 #define SPDK_CONFIG_PREFIX /usr/local 00:07:14.252 #undef SPDK_CONFIG_RAID5F 00:07:14.252 #undef SPDK_CONFIG_RBD 00:07:14.252 #define SPDK_CONFIG_RDMA 1 00:07:14.252 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:14.252 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:14.252 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:14.252 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:14.252 #define SPDK_CONFIG_SHARED 1 00:07:14.252 #undef SPDK_CONFIG_SMA 00:07:14.252 #define SPDK_CONFIG_TESTS 1 00:07:14.252 #undef SPDK_CONFIG_TSAN 00:07:14.252 #define SPDK_CONFIG_UBLK 1 00:07:14.252 #define SPDK_CONFIG_UBSAN 1 00:07:14.252 #undef SPDK_CONFIG_UNIT_TESTS 00:07:14.252 #undef SPDK_CONFIG_URING 00:07:14.252 #define SPDK_CONFIG_URING_PATH 00:07:14.252 #undef SPDK_CONFIG_URING_ZNS 00:07:14.252 #undef SPDK_CONFIG_USDT 00:07:14.252 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:14.252 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:14.252 #define SPDK_CONFIG_VFIO_USER 1 00:07:14.252 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:14.252 #define SPDK_CONFIG_VHOST 1 00:07:14.252 #define SPDK_CONFIG_VIRTIO 1 00:07:14.252 #undef SPDK_CONFIG_VTUNE 00:07:14.252 #define SPDK_CONFIG_VTUNE_DIR 00:07:14.252 #define SPDK_CONFIG_WERROR 1 00:07:14.252 #define SPDK_CONFIG_WPDK_DIR 00:07:14.252 #undef SPDK_CONFIG_XNVME 00:07:14.252 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.252 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:14.253 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:14.254 00:18:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
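The trace above has just finished wiring up the sanitizer environment for this test run. As a condensed, non-verbatim sketch of what common/autotest_common.sh is doing at this point, using only the values visible in this log (the real script assembles the suppression file slightly differently):

# Sanitizer runtime options exported for every test binary in the run
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# LeakSanitizer suppression file: known libfuse3 leak reports are ignored
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo 'leak:libfuse3.so' >> "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file

# Default SPDK JSON-RPC socket used by the rest of the filesystem test
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock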
00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 889345 ]] 00:07:14.254 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 889345 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.1Br7ZF 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1Br7ZF/tests/target /tmp/spdk.1Br7ZF 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122735808512 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6635171840 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864253440 00:07:14.517 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9945088 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:14.518 00:18:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683573248 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1916928 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:14.518 * Looking for test storage... 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122735808512 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8849764352 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.518 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.519 00:18:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.656 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.656 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.656 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:22.657 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:22.657 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:22.657 Found net devices under 0000:31:00.0: cvl_0_0 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:22.657 Found net devices under 0000:31:00.1: cvl_0_1 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.657 00:18:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:07:22.657 00:07:22.657 --- 10.0.0.2 ping statistics --- 00:07:22.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.657 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
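The trace above is nvmf_tcp_init building the test data path: the target-side port cvl_0_0 is moved into a network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 in the root namespace at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction proves the path before any NVMe/TCP traffic flows. Condensed into plain commands (interface names and addresses are copied from this run; the two E810 ports are presumably cabled back to back, which is why root-namespace traffic sent out cvl_0_1 arrives on cvl_0_0 inside the namespace):

    ip netns add cvl_0_0_ns_spdk                                         # target lives in its own net namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open TCP/4420 in the root-namespace firewall
    ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace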
00:07:22.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:07:22.657 00:07:22.657 --- 10.0.0.1 ping statistics --- 00:07:22.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.657 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.657 ************************************ 00:07:22.657 START TEST nvmf_filesystem_no_in_capsule 00:07:22.657 ************************************ 00:07:22.657 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=893561 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 893561 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 893561 ']' 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.658 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.658 [2024-07-16 00:18:36.196858] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:22.658 [2024-07-16 00:18:36.196907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.658 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.658 [2024-07-16 00:18:36.272677] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.918 [2024-07-16 00:18:36.343268] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.918 [2024-07-16 00:18:36.343304] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.918 [2024-07-16 00:18:36.343311] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.918 [2024-07-16 00:18:36.343318] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.918 [2024-07-16 00:18:36.343323] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.918 [2024-07-16 00:18:36.343408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.918 [2024-07-16 00:18:36.343533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.918 [2024-07-16 00:18:36.343680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.918 [2024-07-16 00:18:36.343681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.489 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.489 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:23.489 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.489 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.489 00:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 [2024-07-16 00:18:37.017901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.489 
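At this point nvmf_tgt has been started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the TCP transport is created with -c 0, i.e. no in-capsule data for this first variant. The rpc_cmd calls traced around here then build the export end to end; rpc_cmd is the harness wrapper, but as far as I can tell the same sequence issued directly against the repo's scripts/rpc.py (default socket /var/tmp/spdk.sock) would look roughly like this:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0          # in-capsule data size 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB RAM disk (1048576 x 512 B blocks)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs dump further down confirms the malloc bdev shape: block_size 512, num_blocks 1048576, claimed exclusive_write by the subsystem.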
00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 Malloc1 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.489 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.749 [2024-07-16 00:18:37.145543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:23.749 { 00:07:23.749 "name": "Malloc1", 00:07:23.749 "aliases": [ 00:07:23.749 "e8a3ecba-affc-48f2-b5af-ee06249b9c37" 00:07:23.749 ], 00:07:23.749 "product_name": "Malloc disk", 00:07:23.749 "block_size": 512, 00:07:23.749 "num_blocks": 1048576, 00:07:23.749 "uuid": "e8a3ecba-affc-48f2-b5af-ee06249b9c37", 00:07:23.749 "assigned_rate_limits": { 00:07:23.749 "rw_ios_per_sec": 0, 00:07:23.749 "rw_mbytes_per_sec": 0, 00:07:23.749 "r_mbytes_per_sec": 0, 00:07:23.749 "w_mbytes_per_sec": 0 00:07:23.749 }, 00:07:23.749 "claimed": true, 00:07:23.749 "claim_type": "exclusive_write", 00:07:23.749 "zoned": false, 00:07:23.749 "supported_io_types": { 00:07:23.749 "read": true, 00:07:23.749 "write": true, 00:07:23.749 "unmap": true, 00:07:23.749 "flush": true, 00:07:23.749 "reset": true, 00:07:23.749 "nvme_admin": false, 00:07:23.749 "nvme_io": false, 00:07:23.749 "nvme_io_md": false, 00:07:23.749 "write_zeroes": true, 00:07:23.749 "zcopy": true, 00:07:23.749 "get_zone_info": false, 00:07:23.749 "zone_management": false, 00:07:23.749 "zone_append": false, 00:07:23.749 "compare": false, 00:07:23.749 "compare_and_write": false, 00:07:23.749 "abort": true, 00:07:23.749 "seek_hole": false, 00:07:23.749 "seek_data": false, 00:07:23.749 "copy": true, 00:07:23.749 "nvme_iov_md": false 00:07:23.749 }, 00:07:23.749 "memory_domains": [ 00:07:23.749 { 00:07:23.749 "dma_device_id": "system", 00:07:23.749 "dma_device_type": 1 00:07:23.749 }, 00:07:23.749 { 00:07:23.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.749 "dma_device_type": 2 00:07:23.749 } 00:07:23.749 ], 00:07:23.749 "driver_specific": {} 00:07:23.749 } 00:07:23.749 ]' 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:23.749 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.750 00:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.661 00:18:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.661 00:18:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.661 00:18:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:25.661 00:18:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:25.661 00:18:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:27.689 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.690 00:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.690 00:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:28.261 00:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.646 ************************************ 
00:07:29.646 START TEST filesystem_ext4 00:07:29.646 ************************************ 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:29.646 00:18:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:29.646 mke2fs 1.46.5 (30-Dec-2021) 00:07:29.646 Discarding device blocks: 0/522240 done 00:07:29.646 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:29.646 Filesystem UUID: 9b1fda06-be65-4a31-a2c3-a014b9e484f8 00:07:29.646 Superblock backups stored on blocks: 00:07:29.646 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:29.646 00:07:29.646 Allocating group tables: 0/64 done 00:07:29.646 Writing inode tables: 0/64 done 00:07:32.186 Creating journal (8192 blocks): done 00:07:33.019 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:33.019 00:07:33.019 00:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:33.019 00:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.962 00:18:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 893561 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.962 00:07:33.962 real 0m4.461s 00:07:33.962 user 0m0.026s 00:07:33.962 sys 0m0.052s 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:33.962 ************************************ 00:07:33.962 END TEST filesystem_ext4 00:07:33.962 ************************************ 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.962 ************************************ 00:07:33.962 START TEST filesystem_btrfs 00:07:33.962 ************************************ 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:33.962 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:33.962 
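The make_filesystem branching traced for ext4 above and btrfs here only varies the non-interactive force flag per filesystem type: -F for mkfs.ext4, -f for everything else (mkfs.btrfs in this subtest, mkfs.xfs in the next). A sketch inferred from the xtrace only, not the actual helper source, which also keeps an i counter, presumably for retries that this run never needs:

    make_filesystem() {                        # reconstructed from the trace, not verbatim
        local fstype=$1 dev_name=$2 i=0 force
        [[ $fstype == ext4 ]] && force=-F || force=-f
        mkfs.$fstype $force "$dev_name"        # e.g. mkfs.btrfs -f /dev/nvme0n1p1
    }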
00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:34.532 btrfs-progs v6.6.2 00:07:34.532 See https://btrfs.readthedocs.io for more information. 00:07:34.532 00:07:34.532 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:34.532 NOTE: several default settings have changed in version 5.15, please make sure 00:07:34.532 this does not affect your deployments: 00:07:34.532 - DUP for metadata (-m dup) 00:07:34.532 - enabled no-holes (-O no-holes) 00:07:34.532 - enabled free-space-tree (-R free-space-tree) 00:07:34.532 00:07:34.533 Label: (null) 00:07:34.533 UUID: f0e64c4f-fd2b-464f-a0f5-8879053d6c85 00:07:34.533 Node size: 16384 00:07:34.533 Sector size: 4096 00:07:34.533 Filesystem size: 510.00MiB 00:07:34.533 Block group profiles: 00:07:34.533 Data: single 8.00MiB 00:07:34.533 Metadata: DUP 32.00MiB 00:07:34.533 System: DUP 8.00MiB 00:07:34.533 SSD detected: yes 00:07:34.533 Zoned device: no 00:07:34.533 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:34.533 Runtime features: free-space-tree 00:07:34.533 Checksum: crc32c 00:07:34.533 Number of devices: 1 00:07:34.533 Devices: 00:07:34.533 ID SIZE PATH 00:07:34.533 1 510.00MiB /dev/nvme0n1p1 00:07:34.533 00:07:34.533 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:34.533 00:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 893561 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.105 00:07:35.105 real 0m1.109s 00:07:35.105 user 0m0.024s 00:07:35.105 sys 0m0.074s 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:35.105 
************************************ 00:07:35.105 END TEST filesystem_btrfs 00:07:35.105 ************************************ 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.105 ************************************ 00:07:35.105 START TEST filesystem_xfs 00:07:35.105 ************************************ 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:35.105 00:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:35.105 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:35.105 = sectsz=512 attr=2, projid32bit=1 00:07:35.105 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:35.105 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:35.105 data = bsize=4096 blocks=130560, imaxpct=25 00:07:35.105 = sunit=0 swidth=0 blks 00:07:35.105 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:35.105 log =internal log bsize=4096 blocks=16384, version=2 00:07:35.105 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:35.105 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:36.047 Discarding blocks...Done. 
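Each filesystem_* subtest then runs the same smoke check over the freshly formatted partition: mount it, create and delete a file with a sync on either side, unmount, and verify both that the target process survived the I/O and that the remote namespace is still visible to the host. From the ext4/btrfs traces above (the xfs run below repeats it):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # 893561 in this run: nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # NVMe/TCP namespace still attached on the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # and its test partition still present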
00:07:36.047 00:18:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:36.047 00:18:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 893561 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.955 00:07:37.955 real 0m2.818s 00:07:37.955 user 0m0.028s 00:07:37.955 sys 0m0.053s 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:37.955 ************************************ 00:07:37.955 END TEST filesystem_xfs 00:07:37.955 ************************************ 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:37.955 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:38.525 00:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.525 00:18:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 893561 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 893561 ']' 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 893561 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 893561 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:38.525 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 893561' 00:07:38.525 killing process with pid 893561 00:07:38.526 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 893561 00:07:38.526 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 893561 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:38.785 00:07:38.785 real 0m16.207s 00:07:38.785 user 1m4.011s 00:07:38.785 sys 0m1.078s 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.785 ************************************ 00:07:38.785 END TEST nvmf_filesystem_no_in_capsule 00:07:38.785 ************************************ 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.785 00:18:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.045 ************************************ 00:07:39.045 START TEST nvmf_filesystem_in_capsule 00:07:39.045 ************************************ 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=896916 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 896916 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 896916 ']' 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.045 00:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.045 [2024-07-16 00:18:52.481145] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:39.045 [2024-07-16 00:18:52.481194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.045 [2024-07-16 00:18:52.555266] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.045 [2024-07-16 00:18:52.624143] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.045 [2024-07-16 00:18:52.624180] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
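The second half of the suite, nvmf_filesystem_in_capsule, repeats the whole exercise with in_capsule=4096. The only functional difference is the transport creation RPC: -c sets the in-capsule data size, so small write payloads (up to 4 KiB here) can travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer, while -c 0 in the first pass kept everything out of the capsule. That reading of -c follows from the in_capsule variable feeding it; the remaining flags are identical between the two passes:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass, no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass, 4 KiB in-capsule data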
00:07:39.045 [2024-07-16 00:18:52.624188] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.045 [2024-07-16 00:18:52.624194] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.045 [2024-07-16 00:18:52.624200] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.045 [2024-07-16 00:18:52.624284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.045 [2024-07-16 00:18:52.624494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.045 [2024-07-16 00:18:52.624498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.045 [2024-07-16 00:18:52.624337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 [2024-07-16 00:18:53.299900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 [2024-07-16 00:18:53.426487] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:39.982 { 00:07:39.982 "name": "Malloc1", 00:07:39.982 "aliases": [ 00:07:39.982 "05a9398a-3195-4f9a-aaf1-77a70558359a" 00:07:39.982 ], 00:07:39.982 "product_name": "Malloc disk", 00:07:39.982 "block_size": 512, 00:07:39.982 "num_blocks": 1048576, 00:07:39.982 "uuid": "05a9398a-3195-4f9a-aaf1-77a70558359a", 00:07:39.982 "assigned_rate_limits": { 00:07:39.982 "rw_ios_per_sec": 0, 00:07:39.982 "rw_mbytes_per_sec": 0, 00:07:39.982 "r_mbytes_per_sec": 0, 00:07:39.982 "w_mbytes_per_sec": 0 00:07:39.982 }, 00:07:39.982 "claimed": true, 00:07:39.982 "claim_type": "exclusive_write", 00:07:39.982 "zoned": false, 00:07:39.982 "supported_io_types": { 00:07:39.982 "read": true, 00:07:39.982 "write": true, 00:07:39.982 "unmap": true, 00:07:39.982 "flush": true, 00:07:39.982 "reset": true, 00:07:39.982 "nvme_admin": false, 00:07:39.982 "nvme_io": false, 00:07:39.982 "nvme_io_md": false, 00:07:39.982 "write_zeroes": true, 00:07:39.982 "zcopy": true, 00:07:39.982 "get_zone_info": false, 00:07:39.982 "zone_management": false, 00:07:39.982 
"zone_append": false, 00:07:39.982 "compare": false, 00:07:39.982 "compare_and_write": false, 00:07:39.982 "abort": true, 00:07:39.982 "seek_hole": false, 00:07:39.982 "seek_data": false, 00:07:39.982 "copy": true, 00:07:39.982 "nvme_iov_md": false 00:07:39.982 }, 00:07:39.982 "memory_domains": [ 00:07:39.982 { 00:07:39.982 "dma_device_id": "system", 00:07:39.982 "dma_device_type": 1 00:07:39.982 }, 00:07:39.982 { 00:07:39.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.982 "dma_device_type": 2 00:07:39.982 } 00:07:39.982 ], 00:07:39.982 "driver_specific": {} 00:07:39.982 } 00:07:39.982 ]' 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:39.982 00:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.888 00:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.888 00:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.888 00:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.888 00:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.888 00:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:43.801 00:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:43.801 00:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:43.801 00:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:43.801 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:44.373 00:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.316 ************************************ 00:07:45.316 START TEST filesystem_in_capsule_ext4 00:07:45.316 ************************************ 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:45.316 00:18:58 
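Host-side attach for this pass, same as in the first: connect with nvme-cli from the root namespace, find which block device carries the subsystem serial, check that its size matches the 512 MiB malloc bdev, then lay down a single GPT partition for the filesystem subtests. All values below are copied from the trace (the host NQN/ID is this machine's platform UUID):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe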
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:45.316 00:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.316 mke2fs 1.46.5 (30-Dec-2021) 00:07:45.316 Discarding device blocks: 0/522240 done 00:07:45.316 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:45.316 Filesystem UUID: 0909334d-20e2-401b-8797-cef0473dd06e 00:07:45.316 Superblock backups stored on blocks: 00:07:45.316 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:45.316 00:07:45.316 Allocating group tables: 0/64 done 00:07:45.316 Writing inode tables: 0/64 done 00:07:45.576 Creating journal (8192 blocks): done 00:07:46.406 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:46.406 00:07:46.406 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:46.406 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 896916 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.978 00:07:46.978 real 0m1.708s 00:07:46.978 user 0m0.025s 00:07:46.978 sys 0m0.051s 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 ************************************ 00:07:46.978 END TEST filesystem_in_capsule_ext4 00:07:46.978 ************************************ 
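The ext4 subtest that just finished, and the btrfs and xfs subtests that follow, all run the same check: build the filesystem on the exported partition, mount it, create and delete a file with a sync in between, unmount, and confirm that the nvmf_tgt process and the block devices are still there. A minimal sketch of that sequence, assuming the partition from the parted step is visible as /dev/nvme0n1p1 and that /mnt/device already exists (896916 is this run's nvmf_tgt pid):

mkfs.ext4 -F /dev/nvme0n1p1      # the later subtests use mkfs.btrfs -f and mkfs.xfs -f instead
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 896916                               # target must have survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still present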
00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.978 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.239 ************************************ 00:07:47.239 START TEST filesystem_in_capsule_btrfs 00:07:47.239 ************************************ 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.239 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:47.500 btrfs-progs v6.6.2 00:07:47.500 See https://btrfs.readthedocs.io for more information. 00:07:47.500 00:07:47.500 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:47.500 NOTE: several default settings have changed in version 5.15, please make sure 00:07:47.500 this does not affect your deployments: 00:07:47.500 - DUP for metadata (-m dup) 00:07:47.500 - enabled no-holes (-O no-holes) 00:07:47.500 - enabled free-space-tree (-R free-space-tree) 00:07:47.500 00:07:47.500 Label: (null) 00:07:47.500 UUID: 59c8ec7f-28bb-41e1-bd86-d482ddb43a0d 00:07:47.500 Node size: 16384 00:07:47.500 Sector size: 4096 00:07:47.500 Filesystem size: 510.00MiB 00:07:47.500 Block group profiles: 00:07:47.500 Data: single 8.00MiB 00:07:47.500 Metadata: DUP 32.00MiB 00:07:47.500 System: DUP 8.00MiB 00:07:47.500 SSD detected: yes 00:07:47.500 Zoned device: no 00:07:47.500 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:47.500 Runtime features: free-space-tree 00:07:47.500 Checksum: crc32c 00:07:47.500 Number of devices: 1 00:07:47.500 Devices: 00:07:47.500 ID SIZE PATH 00:07:47.500 1 510.00MiB /dev/nvme0n1p1 00:07:47.500 00:07:47.500 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:47.500 00:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 896916 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.500 00:07:47.500 real 0m0.485s 00:07:47.500 user 0m0.025s 00:07:47.500 sys 0m0.066s 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.500 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.500 ************************************ 00:07:47.500 END TEST filesystem_in_capsule_btrfs 00:07:47.500 ************************************ 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.761 ************************************ 00:07:47.761 START TEST filesystem_in_capsule_xfs 00:07:47.761 ************************************ 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:47.761 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.762 00:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:47.762 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:47.762 = sectsz=512 attr=2, projid32bit=1 00:07:47.762 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:47.762 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:47.762 data = bsize=4096 blocks=130560, imaxpct=25 00:07:47.762 = sunit=0 swidth=0 blks 00:07:47.762 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:47.762 log =internal log bsize=4096 blocks=16384, version=2 00:07:47.762 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:47.762 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.703 Discarding blocks...Done. 
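The xfs geometry printed above is consistent with the sizes seen earlier in the test: the data section is 130560 blocks of 4096 bytes, which is exactly the 510 MiB partition that parted carved out of the 512 MiB namespace; attributing the missing 2 MiB to GPT metadata plus 1 MiB partition alignment is an approximation. A quick cross-check of those numbers, copied from the trace:

echo $(( 512 * 1048576 ))           # 536870912  -> the 512 MiB malloc bdev / namespace
echo $(( 130560 * 4096 ))           # 534773760  -> xfs data section, i.e. the 510 MiB partition
echo $(( 536870912 - 534773760 ))   # 2097152    -> ~2 MiB of GPT metadata and alignment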
00:07:48.703 00:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.703 00:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.617 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.617 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:50.617 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.617 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 896916 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.878 00:07:50.878 real 0m3.111s 00:07:50.878 user 0m0.025s 00:07:50.878 sys 0m0.055s 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.878 ************************************ 00:07:50.878 END TEST filesystem_in_capsule_xfs 00:07:50.878 ************************************ 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.878 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:51.451 00:19:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 896916 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 896916 ']' 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 896916 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 896916 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 896916' 00:07:51.451 killing process with pid 896916 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 896916 00:07:51.451 00:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 896916 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.713 00:07:51.713 real 0m12.797s 00:07:51.713 user 0m50.428s 00:07:51.713 sys 0m1.066s 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.713 ************************************ 00:07:51.713 END TEST nvmf_filesystem_in_capsule 00:07:51.713 ************************************ 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.713 rmmod nvme_tcp 00:07:51.713 rmmod nvme_fabrics 00:07:51.713 rmmod nvme_keyring 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.713 00:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.261 00:19:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.261 00:07:54.261 real 0m39.726s 00:07:54.261 user 1m56.875s 00:07:54.261 sys 0m8.347s 00:07:54.261 00:19:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.261 00:19:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.261 ************************************ 00:07:54.261 END TEST nvmf_filesystem 00:07:54.261 ************************************ 00:07:54.261 00:19:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:54.261 00:19:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:54.261 00:19:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.261 00:19:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.261 00:19:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.261 ************************************ 00:07:54.261 START TEST nvmf_target_discovery 00:07:54.261 ************************************ 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:54.261 * Looking for test storage... 
00:07:54.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.261 00:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.426 00:19:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:02.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:02.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.426 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:02.427 Found net devices under 0000:31:00.0: cvl_0_0 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:02.427 Found net devices under 0000:31:00.1: cvl_0_1 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:08:02.427 00:08:02.427 --- 10.0.0.2 ping statistics --- 00:08:02.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.427 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:08:02.427 00:08:02.427 --- 10.0.0.1 ping statistics --- 00:08:02.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.427 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=904423 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 904423 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 904423 ']' 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.427 00:19:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.427 [2024-07-16 00:19:15.783543] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:02.427 [2024-07-16 00:19:15.783635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.427 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.427 [2024-07-16 00:19:15.863987] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.427 [2024-07-16 00:19:15.938617] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.427 [2024-07-16 00:19:15.938656] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.427 [2024-07-16 00:19:15.938664] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.427 [2024-07-16 00:19:15.938671] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.427 [2024-07-16 00:19:15.938676] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.427 [2024-07-16 00:19:15.938818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.427 [2024-07-16 00:19:15.938932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.427 [2024-07-16 00:19:15.939088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.427 [2024-07-16 00:19:15.939089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.012 [2024-07-16 00:19:16.603875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
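rpc_cmd in this trace issues SPDK JSON-RPC calls against the nvmf_tgt started above; the same target configuration can be reproduced with scripts/rpc.py from the SPDK tree. A minimal sketch for the first of the four subsystems, plus the discovery-service listener and referral added at the end, assuming the default RPC socket /var/tmp/spdk.sock and the 10.0.0.2 listener address configured in the cvl_0_0_ns_spdk namespace earlier (cnode2 through cnode4 differ only in NQN, serial number and null bdev name):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_null_create Null1 102400 512          # same arguments as the trace above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

Null bdevs carry no backing data, which is enough here because the discovery test only inspects subsystem and namespace metadata rather than doing I/O.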
00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.012 Null1 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.012 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 [2024-07-16 00:19:16.664181] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 Null2 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:03.273 00:19:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 Null3 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 Null4 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.273 00:19:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:03.534 00:08:03.534 Discovery Log Number of Records 6, Generation counter 6 00:08:03.534 =====Discovery Log Entry 0====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: current discovery subsystem 00:08:03.534 treq: not required 00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4420 00:08:03.534 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: explicit discovery connections, duplicate discovery information 00:08:03.534 sectype: none 00:08:03.534 =====Discovery Log Entry 1====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: nvme subsystem 00:08:03.534 treq: not required 00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4420 00:08:03.534 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: none 00:08:03.534 sectype: none 00:08:03.534 =====Discovery Log Entry 2====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: nvme subsystem 00:08:03.534 treq: not required 00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4420 00:08:03.534 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: none 00:08:03.534 sectype: none 00:08:03.534 =====Discovery Log Entry 3====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: nvme subsystem 00:08:03.534 treq: not required 00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4420 00:08:03.534 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: none 00:08:03.534 sectype: none 00:08:03.534 =====Discovery Log Entry 4====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: nvme subsystem 00:08:03.534 treq: not required 
00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4420 00:08:03.534 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: none 00:08:03.534 sectype: none 00:08:03.534 =====Discovery Log Entry 5====== 00:08:03.534 trtype: tcp 00:08:03.534 adrfam: ipv4 00:08:03.534 subtype: discovery subsystem referral 00:08:03.534 treq: not required 00:08:03.534 portid: 0 00:08:03.534 trsvcid: 4430 00:08:03.534 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:03.534 traddr: 10.0.0.2 00:08:03.534 eflags: none 00:08:03.534 sectype: none 00:08:03.534 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:03.534 Perform nvmf subsystem discovery via RPC 00:08:03.534 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:03.534 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.534 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.534 [ 00:08:03.534 { 00:08:03.534 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:03.534 "subtype": "Discovery", 00:08:03.534 "listen_addresses": [ 00:08:03.534 { 00:08:03.534 "trtype": "TCP", 00:08:03.534 "adrfam": "IPv4", 00:08:03.534 "traddr": "10.0.0.2", 00:08:03.534 "trsvcid": "4420" 00:08:03.534 } 00:08:03.534 ], 00:08:03.534 "allow_any_host": true, 00:08:03.534 "hosts": [] 00:08:03.534 }, 00:08:03.534 { 00:08:03.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.534 "subtype": "NVMe", 00:08:03.534 "listen_addresses": [ 00:08:03.534 { 00:08:03.534 "trtype": "TCP", 00:08:03.534 "adrfam": "IPv4", 00:08:03.534 "traddr": "10.0.0.2", 00:08:03.534 "trsvcid": "4420" 00:08:03.534 } 00:08:03.534 ], 00:08:03.534 "allow_any_host": true, 00:08:03.534 "hosts": [], 00:08:03.534 "serial_number": "SPDK00000000000001", 00:08:03.534 "model_number": "SPDK bdev Controller", 00:08:03.534 "max_namespaces": 32, 00:08:03.534 "min_cntlid": 1, 00:08:03.534 "max_cntlid": 65519, 00:08:03.535 "namespaces": [ 00:08:03.535 { 00:08:03.535 "nsid": 1, 00:08:03.535 "bdev_name": "Null1", 00:08:03.535 "name": "Null1", 00:08:03.535 "nguid": "F2B0C8EBE0784E89B7934A8CE6FE917C", 00:08:03.535 "uuid": "f2b0c8eb-e078-4e89-b793-4a8ce6fe917c" 00:08:03.535 } 00:08:03.535 ] 00:08:03.535 }, 00:08:03.535 { 00:08:03.535 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:03.535 "subtype": "NVMe", 00:08:03.535 "listen_addresses": [ 00:08:03.535 { 00:08:03.535 "trtype": "TCP", 00:08:03.535 "adrfam": "IPv4", 00:08:03.535 "traddr": "10.0.0.2", 00:08:03.535 "trsvcid": "4420" 00:08:03.535 } 00:08:03.535 ], 00:08:03.535 "allow_any_host": true, 00:08:03.535 "hosts": [], 00:08:03.535 "serial_number": "SPDK00000000000002", 00:08:03.535 "model_number": "SPDK bdev Controller", 00:08:03.535 "max_namespaces": 32, 00:08:03.535 "min_cntlid": 1, 00:08:03.535 "max_cntlid": 65519, 00:08:03.535 "namespaces": [ 00:08:03.535 { 00:08:03.535 "nsid": 1, 00:08:03.535 "bdev_name": "Null2", 00:08:03.535 "name": "Null2", 00:08:03.535 "nguid": "60EFF66A9A9C4054A0A9FB571563922A", 00:08:03.535 "uuid": "60eff66a-9a9c-4054-a0a9-fb571563922a" 00:08:03.535 } 00:08:03.535 ] 00:08:03.535 }, 00:08:03.535 { 00:08:03.535 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:03.535 "subtype": "NVMe", 00:08:03.535 "listen_addresses": [ 00:08:03.535 { 00:08:03.535 "trtype": "TCP", 00:08:03.535 "adrfam": "IPv4", 00:08:03.535 "traddr": "10.0.0.2", 00:08:03.535 "trsvcid": "4420" 00:08:03.535 } 00:08:03.535 ], 00:08:03.535 "allow_any_host": true, 
00:08:03.535 "hosts": [], 00:08:03.535 "serial_number": "SPDK00000000000003", 00:08:03.535 "model_number": "SPDK bdev Controller", 00:08:03.535 "max_namespaces": 32, 00:08:03.535 "min_cntlid": 1, 00:08:03.535 "max_cntlid": 65519, 00:08:03.535 "namespaces": [ 00:08:03.535 { 00:08:03.535 "nsid": 1, 00:08:03.535 "bdev_name": "Null3", 00:08:03.535 "name": "Null3", 00:08:03.535 "nguid": "4F9DD6077E4F4C96B24C7E3914A11CD1", 00:08:03.535 "uuid": "4f9dd607-7e4f-4c96-b24c-7e3914a11cd1" 00:08:03.535 } 00:08:03.535 ] 00:08:03.535 }, 00:08:03.535 { 00:08:03.535 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:03.535 "subtype": "NVMe", 00:08:03.535 "listen_addresses": [ 00:08:03.535 { 00:08:03.535 "trtype": "TCP", 00:08:03.535 "adrfam": "IPv4", 00:08:03.535 "traddr": "10.0.0.2", 00:08:03.535 "trsvcid": "4420" 00:08:03.535 } 00:08:03.535 ], 00:08:03.535 "allow_any_host": true, 00:08:03.535 "hosts": [], 00:08:03.535 "serial_number": "SPDK00000000000004", 00:08:03.535 "model_number": "SPDK bdev Controller", 00:08:03.535 "max_namespaces": 32, 00:08:03.535 "min_cntlid": 1, 00:08:03.535 "max_cntlid": 65519, 00:08:03.535 "namespaces": [ 00:08:03.535 { 00:08:03.535 "nsid": 1, 00:08:03.535 "bdev_name": "Null4", 00:08:03.535 "name": "Null4", 00:08:03.535 "nguid": "9154EFD045AC4337AC268A62B7840B15", 00:08:03.535 "uuid": "9154efd0-45ac-4337-ac26-8a62b7840b15" 00:08:03.535 } 00:08:03.535 ] 00:08:03.535 } 00:08:03.535 ] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery 
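The teardown mirrors the setup: each subsystem is deleted ahead of its backing null bdev, the port-4430 referral is dropped, and bdev_get_bdevs is expected to come back empty. One iteration plus the final check, under the same assumptions as the setup sketch:

  # Tear down one target and verify nothing is left behind.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  scripts/rpc.py bdev_null_delete Null3
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # The test treats a non-empty name list here as a failure.
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'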
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.795 rmmod nvme_tcp 00:08:03.795 rmmod nvme_fabrics 00:08:03.795 rmmod nvme_keyring 00:08:03.795 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 904423 ']' 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 904423 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 904423 ']' 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 904423 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 904423 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 904423' 00:08:03.796 killing process with pid 904423 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 904423 00:08:03.796 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 904423 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.056 00:19:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.969 00:19:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.969 00:08:05.969 real 0m12.052s 00:08:05.969 user 0m8.418s 00:08:05.969 sys 0m6.384s 00:08:05.969 00:19:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.969 00:19:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.970 ************************************ 00:08:05.970 END TEST nvmf_target_discovery 00:08:05.970 ************************************ 00:08:05.970 00:19:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
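nvmftestfini, logged above, is the shared epilogue: unload the host-side NVMe modules, stop the nvmf_tgt process, and flush the initiator-side address. A rough sketch of those steps; _remove_spdk_ns is not expanded in this log, so the namespace deletion below is an assumption:

  # Host-side module cleanup (mirrors the rmmod lines above).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the target; 904423 is the nvmf_tgt PID captured at startup in this run.
  kill 904423
  # _remove_spdk_ns presumably tears down the target namespace (assumption, body not
  # shown in this log), after which the initiator interface address is flushed.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1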
return 0 00:08:05.970 00:19:19 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:05.970 00:19:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.970 00:19:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.970 00:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.970 ************************************ 00:08:05.970 START TEST nvmf_referrals 00:08:05.970 ************************************ 00:08:05.970 00:19:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:06.230 * Looking for test storage... 00:08:06.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.230 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.231 00:19:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.372 00:19:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.372 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:14.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:14.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.373 00:19:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:14.373 Found net devices under 0000:31:00.0: cvl_0_0 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:14.373 Found net devices under 0000:31:00.1: cvl_0_1 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.373 00:19:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:08:14.373 00:08:14.373 --- 10.0.0.2 ping statistics --- 00:08:14.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.373 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:08:14.373 00:08:14.373 --- 10.0.0.1 ping statistics --- 00:08:14.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.373 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=909451 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 909451 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 909451 ']' 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
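The nvmftestinit sequence above turns the two ice-driven E810 ports into a point-to-point rig: cvl_0_0 moves into a private network namespace and serves as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that plumbing, with interface and namespace names taken from this run:

  # Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in on the initiator side, then sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1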
00:08:14.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.373 00:19:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.373 [2024-07-16 00:19:27.959313] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:14.373 [2024-07-16 00:19:27.959401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.373 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.632 [2024-07-16 00:19:28.041472] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.632 [2024-07-16 00:19:28.116701] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.632 [2024-07-16 00:19:28.116744] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.632 [2024-07-16 00:19:28.116752] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.632 [2024-07-16 00:19:28.116758] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.632 [2024-07-16 00:19:28.116764] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.632 [2024-07-16 00:19:28.116914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.632 [2024-07-16 00:19:28.117031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.632 [2024-07-16 00:19:28.117187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.632 [2024-07-16 00:19:28.117188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.201 [2024-07-16 00:19:28.778847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.201 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.202 [2024-07-16 00:19:28.795064] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.202 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.461 00:19:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:15.721 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.981 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.241 00:19:29 
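Every referral check above follows one pattern: mutate the referral list over RPC, then read it back both from the target (nvmf_discovery_get_referrals) and from the host's view of the discovery log on port 8009. A condensed sketch of the additions exercised here, assuming the same target address and the hostnqn/hostid used throughout the run:

  # Plain referral to another discovery service...
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # ...and subsystem-qualified referrals (-n), as exercised just above.
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # Target-side view of the referral addresses.
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Host-side view, dropping the record for the discovery subsystem being queried.
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort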
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.241 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.502 00:19:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.502 00:19:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:16.502 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:16.762 
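Removal is symmetric: a referral is deleted with the same transport, address, and NQN it was added with, after which both the RPC list and the host-visible discovery log should be empty again. A short sketch under the same assumptions:

  # Drop the subsystem-qualified and discovery-qualified referrals added above.
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
  # Both views should now report zero referrals.
  scripts/rpc.py nvmf_discovery_get_referrals | jq length
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "discovery subsystem referral")'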
00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.762 rmmod nvme_tcp 00:08:16.762 rmmod nvme_fabrics 00:08:16.762 rmmod nvme_keyring 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 909451 ']' 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 909451 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 909451 ']' 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 909451 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 909451 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:16.762 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 909451' 00:08:17.022 killing process with pid 909451 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 909451 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 909451 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.022 00:19:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.563 00:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.563 00:08:19.563 real 0m13.014s 00:08:19.563 user 0m12.850s 00:08:19.563 sys 0m6.586s 00:08:19.563 00:19:32 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.563 00:19:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.564 ************************************ 00:08:19.564 END TEST nvmf_referrals 00:08:19.564 ************************************ 00:08:19.564 00:19:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:19.564 00:19:32 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:19.564 00:19:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.564 00:19:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.564 00:19:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.564 ************************************ 00:08:19.564 START TEST nvmf_connect_disconnect 00:08:19.564 ************************************ 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:19.564 * Looking for test storage... 00:08:19.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.564 00:19:32 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.564 00:19:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:27.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:27.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.695 00:19:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:27.695 Found net devices under 0000:31:00.0: cvl_0_0 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:27.695 Found net devices under 0000:31:00.1: cvl_0_1 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:08:27.695 00:08:27.695 --- 10.0.0.2 ping statistics --- 00:08:27.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.695 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:08:27.695 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:08:27.696 00:08:27.696 --- 10.0.0.1 ping statistics --- 00:08:27.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.696 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=914668 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 914668 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 914668 ']' 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.696 00:19:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.696 [2024-07-16 00:19:40.938228] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
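The nvmfappstart step traced just above amounts to launching nvmf_tgt inside the target namespace and then blocking until its RPC socket answers. A minimal sketch of that step, assuming the SPDK tree from this workspace and the default /var/tmp/spdk.sock socket; the polling loop is a simplified stand-in for the suite's waitforlisten helper, not its actual implementation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Start the target inside the namespace with the same core mask and trace flags as the log.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the default RPC socket until the app responds (simplified stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
    sleep 0.5
done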
00:08:27.696 [2024-07-16 00:19:40.938290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.696 [2024-07-16 00:19:41.004965] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.696 [2024-07-16 00:19:41.072047] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.696 [2024-07-16 00:19:41.072083] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.696 [2024-07-16 00:19:41.072091] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.696 [2024-07-16 00:19:41.072098] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.696 [2024-07-16 00:19:41.072104] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.696 [2024-07-16 00:19:41.072252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.696 [2024-07-16 00:19:41.072350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.696 [2024-07-16 00:19:41.072483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.696 [2024-07-16 00:19:41.072485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.265 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 [2024-07-16 00:19:41.786092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.266 00:19:41 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 [2024-07-16 00:19:41.845449] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:28.266 00:19:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:32.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.576 rmmod nvme_tcp 00:08:46.576 rmmod nvme_fabrics 00:08:46.576 rmmod nvme_keyring 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 914668 ']' 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 914668 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 914668 ']' 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 914668 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.576 00:19:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 914668 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 914668' 00:08:46.576 killing process with pid 914668 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 914668 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 914668 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.576 00:20:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.119 00:20:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.119 00:08:49.119 real 0m29.591s 00:08:49.119 user 1m18.500s 00:08:49.119 sys 0m6.991s 00:08:49.119 00:20:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.119 00:20:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.119 ************************************ 00:08:49.119 END TEST nvmf_connect_disconnect 00:08:49.119 ************************************ 00:08:49.119 00:20:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:49.119 00:20:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:49.119 00:20:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.119 00:20:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.119 00:20:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:49.119 ************************************ 00:08:49.119 START TEST nvmf_multitarget 00:08:49.119 ************************************ 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:49.119 * Looking for test storage... 
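For reference, the nvmf_connect_disconnect run that finishes here built its target with five RPC calls before looping through connect/disconnect cycles; the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above correspond to num_iterations=5. A hedged sketch of the same target-side sequence, issued through rpc.py against the default socket instead of the suite's rpc_cmd wrapper (the host-side connect/disconnect loop is omitted):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Transport, backing bdev, subsystem, namespace, listener -- same arguments as the trace above.
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                    # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420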
00:08:49.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
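nvmftestinit here repeats, for nvmf_multitarget, the same physical-NIC bring-up already traced for nvmf_connect_disconnect (and visible again a few lines below): one E810 port, cvl_0_0, is moved into a fresh network namespace to act as the target side, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of that bring-up, using the interface and address names from this log:

NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                        # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                    # and the reverse direction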
00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.119 00:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.260 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:57.261 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:57.261 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:57.261 Found net devices under 0000:31:00.0: cvl_0_0 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
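The device scan traced above keys off PCI vendor:device IDs (0x8086:0x159b for these two E810 ports) and resolves each matching function to its kernel net device through sysfs. A stripped-down sketch of that lookup for the two ports reported in this log:

# Resolve the net device name behind each detected E810 PCI function, as the scan does.
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done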
00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:57.261 Found net devices under 0000:31:00.1: cvl_0_1 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:08:57.261 00:08:57.261 --- 10.0.0.2 ping statistics --- 00:08:57.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.261 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:57.261 00:08:57.261 --- 10.0.0.1 ping statistics --- 00:08:57.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.261 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=923213 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 923213 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 923213 ']' 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.261 00:20:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:57.261 [2024-07-16 00:20:10.761701] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:08:57.261 [2024-07-16 00:20:10.761753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.261 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.261 [2024-07-16 00:20:10.839013] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.522 [2024-07-16 00:20:10.906517] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.522 [2024-07-16 00:20:10.906551] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.522 [2024-07-16 00:20:10.906558] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.522 [2024-07-16 00:20:10.906565] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.522 [2024-07-16 00:20:10.906571] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.522 [2024-07-16 00:20:10.906706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.522 [2024-07-16 00:20:10.906824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.522 [2024-07-16 00:20:10.906979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.522 [2024-07-16 00:20:10.906980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:58.095 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:58.355 "nvmf_tgt_1" 00:08:58.355 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:58.355 "nvmf_tgt_2" 00:08:58.355 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.356 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:58.356 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:58.356 00:20:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:58.615 true 00:08:58.615 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:58.615 true 00:08:58.615 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:58.615 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.877 rmmod nvme_tcp 00:08:58.877 rmmod nvme_fabrics 00:08:58.877 rmmod nvme_keyring 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 923213 ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 923213 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 923213 ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 923213 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 923213 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 923213' 00:08:58.877 killing process with pid 923213 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 923213 00:08:58.877 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 923213 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.139 00:20:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.149 00:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.149 00:09:01.149 real 0m12.241s 00:09:01.149 user 0m9.523s 00:09:01.149 sys 0m6.455s 00:09:01.149 00:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.149 00:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 ************************************ 00:09:01.149 END TEST nvmf_multitarget 00:09:01.149 ************************************ 00:09:01.149 00:20:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.149 00:20:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:01.149 00:20:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.149 00:20:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.149 00:20:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 ************************************ 00:09:01.149 START TEST nvmf_rpc 00:09:01.149 ************************************ 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:01.149 * Looking for test storage... 
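The nvmf_multitarget run that just finished boils down to a short RPC conversation with the running target. It is condensed here for reference, with the workspace path shortened to test/nvmf/target; the bracket checks are simplified relative to the actual multitarget.sh script:

    # Create two extra targets, confirm the count, then delete them again.
    rpc=test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1" on success
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1              # each delete reports "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only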
00:09:01.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.149 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.412 00:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
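What follows in the trace is nvmftestinit for the rpc test: the harness finds the two e810 ports, resolves their kernel interface names through sysfs, and rebuilds the namespace topology used throughout this run. An outline of that sequence, assembled from the commands traced below; the PCI addresses are specific to this host and error handling is omitted:

    # Map each e810 PCI function to its netdev (cvl_0_0 / cvl_0_1).
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")               # keep only the interface name
    done
    # The target side lives in its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept TCP/4420 arriving on cvl_0_1
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions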
00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:09.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:09.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:09.561 Found net devices under 0000:31:00.0: cvl_0_0 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:09.561 Found net devices under 0000:31:00.1: cvl_0_1 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:09:09.561 00:09:09.561 --- 10.0.0.2 ping statistics --- 00:09:09.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.561 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:09:09.561 00:09:09.561 --- 10.0.0.1 ping statistics --- 00:09:09.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.561 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=928203 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 928203 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 928203 ']' 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.561 00:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.561 [2024-07-16 00:20:22.811950] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:09.562 [2024-07-16 00:20:22.812022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.562 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.562 [2024-07-16 00:20:22.893494] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.562 [2024-07-16 00:20:22.969369] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.562 [2024-07-16 00:20:22.969410] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:09.562 [2024-07-16 00:20:22.969418] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.562 [2024-07-16 00:20:22.969424] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.562 [2024-07-16 00:20:22.969430] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.562 [2024-07-16 00:20:22.969568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.562 [2024-07-16 00:20:22.969684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.562 [2024-07-16 00:20:22.969839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.562 [2024-07-16 00:20:22.969840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:10.131 "tick_rate": 2400000000, 00:09:10.131 "poll_groups": [ 00:09:10.131 { 00:09:10.131 "name": "nvmf_tgt_poll_group_000", 00:09:10.131 "admin_qpairs": 0, 00:09:10.131 "io_qpairs": 0, 00:09:10.131 "current_admin_qpairs": 0, 00:09:10.131 "current_io_qpairs": 0, 00:09:10.131 "pending_bdev_io": 0, 00:09:10.131 "completed_nvme_io": 0, 00:09:10.131 "transports": [] 00:09:10.131 }, 00:09:10.131 { 00:09:10.131 "name": "nvmf_tgt_poll_group_001", 00:09:10.131 "admin_qpairs": 0, 00:09:10.131 "io_qpairs": 0, 00:09:10.131 "current_admin_qpairs": 0, 00:09:10.131 "current_io_qpairs": 0, 00:09:10.131 "pending_bdev_io": 0, 00:09:10.131 "completed_nvme_io": 0, 00:09:10.131 "transports": [] 00:09:10.131 }, 00:09:10.131 { 00:09:10.131 "name": "nvmf_tgt_poll_group_002", 00:09:10.131 "admin_qpairs": 0, 00:09:10.131 "io_qpairs": 0, 00:09:10.131 "current_admin_qpairs": 0, 00:09:10.131 "current_io_qpairs": 0, 00:09:10.131 "pending_bdev_io": 0, 00:09:10.131 "completed_nvme_io": 0, 00:09:10.131 "transports": [] 00:09:10.131 }, 00:09:10.131 { 00:09:10.131 "name": "nvmf_tgt_poll_group_003", 00:09:10.131 "admin_qpairs": 0, 00:09:10.131 "io_qpairs": 0, 00:09:10.131 "current_admin_qpairs": 0, 00:09:10.131 "current_io_qpairs": 0, 00:09:10.131 "pending_bdev_io": 0, 00:09:10.131 "completed_nvme_io": 0, 00:09:10.131 "transports": [] 00:09:10.131 } 00:09:10.131 ] 00:09:10.131 }' 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.131 [2024-07-16 00:20:23.747227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.131 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:10.392 "tick_rate": 2400000000, 00:09:10.392 "poll_groups": [ 00:09:10.392 { 00:09:10.392 "name": "nvmf_tgt_poll_group_000", 00:09:10.392 "admin_qpairs": 0, 00:09:10.392 "io_qpairs": 0, 00:09:10.392 "current_admin_qpairs": 0, 00:09:10.392 "current_io_qpairs": 0, 00:09:10.392 "pending_bdev_io": 0, 00:09:10.392 "completed_nvme_io": 0, 00:09:10.392 "transports": [ 00:09:10.392 { 00:09:10.392 "trtype": "TCP" 00:09:10.392 } 00:09:10.392 ] 00:09:10.392 }, 00:09:10.392 { 00:09:10.392 "name": "nvmf_tgt_poll_group_001", 00:09:10.392 "admin_qpairs": 0, 00:09:10.392 "io_qpairs": 0, 00:09:10.392 "current_admin_qpairs": 0, 00:09:10.392 "current_io_qpairs": 0, 00:09:10.392 "pending_bdev_io": 0, 00:09:10.392 "completed_nvme_io": 0, 00:09:10.392 "transports": [ 00:09:10.392 { 00:09:10.392 "trtype": "TCP" 00:09:10.392 } 00:09:10.392 ] 00:09:10.392 }, 00:09:10.392 { 00:09:10.392 "name": "nvmf_tgt_poll_group_002", 00:09:10.392 "admin_qpairs": 0, 00:09:10.392 "io_qpairs": 0, 00:09:10.392 "current_admin_qpairs": 0, 00:09:10.392 "current_io_qpairs": 0, 00:09:10.392 "pending_bdev_io": 0, 00:09:10.392 "completed_nvme_io": 0, 00:09:10.392 "transports": [ 00:09:10.392 { 00:09:10.392 "trtype": "TCP" 00:09:10.392 } 00:09:10.392 ] 00:09:10.392 }, 00:09:10.392 { 00:09:10.392 "name": "nvmf_tgt_poll_group_003", 00:09:10.392 "admin_qpairs": 0, 00:09:10.392 "io_qpairs": 0, 00:09:10.392 "current_admin_qpairs": 0, 00:09:10.392 "current_io_qpairs": 0, 00:09:10.392 "pending_bdev_io": 0, 00:09:10.392 "completed_nvme_io": 0, 00:09:10.392 "transports": [ 00:09:10.392 { 00:09:10.392 "trtype": "TCP" 00:09:10.392 } 00:09:10.392 ] 00:09:10.392 } 00:09:10.392 ] 00:09:10.392 }' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 Malloc1 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 [2024-07-16 00:20:23.935049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:10.392 [2024-07-16 00:20:23.961749] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:10.392 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:10.392 could not add new controller: failed to write to nvme-fabrics device 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.392 00:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.305 00:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.305 00:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.305 00:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.305 00:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.305 00:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.218 00:20:27 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:14.218 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.218 [2024-07-16 00:20:27.649663] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:14.218 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:14.219 could not add new controller: failed to write to nvme-fabrics device 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.219 00:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.607 00:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.607 00:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.607 00:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.607 00:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.607 00:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.143 00:20:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 [2024-07-16 00:20:31.342228] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.143 00:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.527 00:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.527 00:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.527 00:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.527 00:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.527 00:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:21.442 00:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 [2024-07-16 00:20:35.054481] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.442 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.443 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.703 00:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.703 00:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.086 00:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.086 00:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:23.086 00:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.086 00:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.086 00:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.997 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.997 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.997 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 [2024-07-16 00:20:38.761316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.258 00:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.640 00:20:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.640 00:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.640 00:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.640 00:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.640 00:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 [2024-07-16 00:20:42.443869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.207 00:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.593 00:20:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.593 00:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.593 00:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.593 00:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.593 00:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.503 
00:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:32.503 00:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 [2024-07-16 00:20:46.112305] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.503 00:20:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.763 00:20:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.763 00:20:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.147 00:20:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.147 00:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.147 00:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.147 00:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.147 00:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:36.087 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 [2024-07-16 00:20:49.780267] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 [2024-07-16 00:20:49.840398] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 [2024-07-16 00:20:49.900568] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
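The connect/disconnect iterations traced above wrap nvme connect and nvme disconnect in a serial-number poll: lsblk is queried until a block device carrying the subsystem serial appears, and again until it disappears. A minimal stand-alone sketch of that pattern, assuming nvme-cli and lsblk are available and using placeholder values matching the NQN and address seen in this run:

  SERIAL=SPDKISFASTANDAWESOME                # serial set at nvmf_create_subsystem time
  NQN=nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  for i in $(seq 1 15); do                   # same retry budget as the trace (i++ <= 15)
      # a device whose SERIAL column matches means the namespace is visible
      [ "$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")" -ge 1 ] && break
      sleep 2
  done
  nvme disconnect -n "$NQN"
  # block until no device with that serial remains
  while lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL"; do sleep 1; done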
00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 [2024-07-16 00:20:49.960753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.348 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
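Each pass of the loop around this point exercises the same subsystem lifecycle over JSON-RPC. A condensed sketch of one iteration, assuming the rpc.py script from the SPDK checkout used by this job and a Malloc1 bdev already created on the target:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME      # fixed serial for the lsblk checks
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                      # attach the namespace
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  # tear-down half of the iteration
  $rpc nvmf_subsystem_remove_ns "$nqn" 1                         # nsid 1 here; 5 in the earlier loop that used -n 5
  $rpc nvmf_delete_subsystem "$nqn"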
00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 [2024-07-16 00:20:50.021037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:36.609 "tick_rate": 2400000000, 00:09:36.609 "poll_groups": [ 00:09:36.609 { 00:09:36.609 "name": "nvmf_tgt_poll_group_000", 00:09:36.609 "admin_qpairs": 0, 00:09:36.609 "io_qpairs": 224, 00:09:36.609 "current_admin_qpairs": 0, 00:09:36.609 "current_io_qpairs": 0, 00:09:36.609 "pending_bdev_io": 0, 00:09:36.609 "completed_nvme_io": 520, 00:09:36.609 "transports": [ 00:09:36.609 { 00:09:36.609 "trtype": "TCP" 00:09:36.609 } 00:09:36.609 ] 00:09:36.609 }, 00:09:36.609 { 00:09:36.609 "name": "nvmf_tgt_poll_group_001", 00:09:36.609 "admin_qpairs": 1, 00:09:36.609 "io_qpairs": 223, 00:09:36.609 "current_admin_qpairs": 0, 00:09:36.609 "current_io_qpairs": 0, 00:09:36.609 "pending_bdev_io": 0, 00:09:36.609 "completed_nvme_io": 223, 00:09:36.609 "transports": [ 00:09:36.609 { 00:09:36.609 "trtype": "TCP" 00:09:36.609 } 00:09:36.609 ] 00:09:36.609 }, 00:09:36.609 { 
00:09:36.609 "name": "nvmf_tgt_poll_group_002", 00:09:36.609 "admin_qpairs": 6, 00:09:36.609 "io_qpairs": 218, 00:09:36.609 "current_admin_qpairs": 0, 00:09:36.609 "current_io_qpairs": 0, 00:09:36.609 "pending_bdev_io": 0, 00:09:36.609 "completed_nvme_io": 219, 00:09:36.609 "transports": [ 00:09:36.610 { 00:09:36.610 "trtype": "TCP" 00:09:36.610 } 00:09:36.610 ] 00:09:36.610 }, 00:09:36.610 { 00:09:36.610 "name": "nvmf_tgt_poll_group_003", 00:09:36.610 "admin_qpairs": 0, 00:09:36.610 "io_qpairs": 224, 00:09:36.610 "current_admin_qpairs": 0, 00:09:36.610 "current_io_qpairs": 0, 00:09:36.610 "pending_bdev_io": 0, 00:09:36.610 "completed_nvme_io": 277, 00:09:36.610 "transports": [ 00:09:36.610 { 00:09:36.610 "trtype": "TCP" 00:09:36.610 } 00:09:36.610 ] 00:09:36.610 } 00:09:36.610 ] 00:09:36.610 }' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.610 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.610 rmmod nvme_tcp 00:09:36.610 rmmod nvme_fabrics 00:09:36.610 rmmod nvme_keyring 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 928203 ']' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 928203 ']' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928203' 00:09:36.870 killing process with pid 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 928203 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.870 00:20:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.412 00:20:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.412 00:09:39.412 real 0m37.853s 00:09:39.412 user 1m52.118s 00:09:39.412 sys 0m7.456s 00:09:39.412 00:20:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.412 00:20:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.412 ************************************ 00:09:39.412 END TEST nvmf_rpc 00:09:39.412 ************************************ 00:09:39.412 00:20:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:39.412 00:20:52 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:39.412 00:20:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:39.412 00:20:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.412 00:20:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.412 ************************************ 00:09:39.412 START TEST nvmf_invalid 00:09:39.412 ************************************ 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:39.412 * Looking for test storage... 
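Before the tear-down and module unload above, the nvmf_rpc run validates aggregate activity by pulling nvmf_get_stats and summing the per-poll-group qpair counters with jq and awk. A minimal sketch of that reduction, reusing the rpc.py path from the rest of this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)
  # sum one numeric field across all poll groups
  admin_qpairs=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
  io_qpairs=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
  echo "admin_qpairs=$admin_qpairs io_qpairs=$io_qpairs"         # the test asserts both sums are > 0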
00:09:39.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.412 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.413 00:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.623 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:47.624 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:47.624 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:47.624 Found net devices under 0000:31:00.0: cvl_0_0 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:47.624 Found net devices under 0000:31:00.1: cvl_0_1 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:09:47.624 00:09:47.624 --- 10.0.0.2 ping statistics --- 00:09:47.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.624 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:09:47.624 00:09:47.624 --- 10.0.0.1 ping statistics --- 00:09:47.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.624 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=938661 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 938661 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 938661 ']' 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.624 00:21:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 [2024-07-16 00:21:01.049495] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
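The nvmf_tcp_init sequence in the trace above carves the two detected ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1, with an iptables rule opening the NVMe/TCP port and a ping in each direction to confirm reachability. A sketch of that setup using the interface and namespace names from this run:

  ns=cvl_0_0_ns_spdk                                    # namespace that will host the nvmf target
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                       # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1                # target -> initiator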
00:09:47.624 [2024-07-16 00:21:01.049546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.624 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.624 [2024-07-16 00:21:01.125386] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.624 [2024-07-16 00:21:01.194445] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.624 [2024-07-16 00:21:01.194482] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.624 [2024-07-16 00:21:01.194490] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.624 [2024-07-16 00:21:01.194496] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.624 [2024-07-16 00:21:01.194502] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.624 [2024-07-16 00:21:01.194661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.624 [2024-07-16 00:21:01.194777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.624 [2024-07-16 00:21:01.194933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.624 [2024-07-16 00:21:01.194934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.564 00:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.564 00:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:48.564 00:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.565 00:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.565 00:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:48.565 00:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.565 00:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:48.565 00:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8482 00:09:48.565 [2024-07-16 00:21:02.008250] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:48.565 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:48.565 { 00:09:48.565 "nqn": "nqn.2016-06.io.spdk:cnode8482", 00:09:48.565 "tgt_name": "foobar", 00:09:48.565 "method": "nvmf_create_subsystem", 00:09:48.565 "req_id": 1 00:09:48.565 } 00:09:48.565 Got JSON-RPC error response 00:09:48.565 response: 00:09:48.565 { 00:09:48.565 "code": -32603, 00:09:48.565 "message": "Unable to find target foobar" 00:09:48.565 }' 00:09:48.565 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:48.565 { 00:09:48.565 "nqn": "nqn.2016-06.io.spdk:cnode8482", 00:09:48.565 "tgt_name": "foobar", 00:09:48.565 "method": "nvmf_create_subsystem", 00:09:48.565 "req_id": 1 00:09:48.565 } 00:09:48.565 Got JSON-RPC error response 00:09:48.565 response: 00:09:48.565 { 00:09:48.565 "code": -32603, 00:09:48.565 "message": "Unable to find target foobar" 00:09:48.565 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:48.565 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:48.565 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16733 00:09:48.565 [2024-07-16 00:21:02.185008] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16733: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:48.826 { 00:09:48.826 "nqn": "nqn.2016-06.io.spdk:cnode16733", 00:09:48.826 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:48.826 "method": "nvmf_create_subsystem", 00:09:48.826 "req_id": 1 00:09:48.826 } 00:09:48.826 Got JSON-RPC error response 00:09:48.826 response: 00:09:48.826 { 00:09:48.826 "code": -32602, 00:09:48.826 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:48.826 }' 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:48.826 { 00:09:48.826 "nqn": "nqn.2016-06.io.spdk:cnode16733", 00:09:48.826 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:48.826 "method": "nvmf_create_subsystem", 00:09:48.826 "req_id": 1 00:09:48.826 } 00:09:48.826 Got JSON-RPC error response 00:09:48.826 response: 00:09:48.826 { 00:09:48.826 "code": -32602, 00:09:48.826 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:48.826 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16595 00:09:48.826 [2024-07-16 00:21:02.361608] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16595: invalid model number 'SPDK_Controller' 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:48.826 { 00:09:48.826 "nqn": "nqn.2016-06.io.spdk:cnode16595", 00:09:48.826 "model_number": "SPDK_Controller\u001f", 00:09:48.826 "method": "nvmf_create_subsystem", 00:09:48.826 "req_id": 1 00:09:48.826 } 00:09:48.826 Got JSON-RPC error response 00:09:48.826 response: 00:09:48.826 { 00:09:48.826 "code": -32602, 00:09:48.826 "message": "Invalid MN SPDK_Controller\u001f" 00:09:48.826 }' 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:48.826 { 00:09:48.826 "nqn": "nqn.2016-06.io.spdk:cnode16595", 00:09:48.826 "model_number": "SPDK_Controller\u001f", 00:09:48.826 "method": "nvmf_create_subsystem", 00:09:48.826 "req_id": 1 00:09:48.826 } 00:09:48.826 Got JSON-RPC error response 00:09:48.826 response: 00:09:48.826 { 00:09:48.826 "code": -32602, 00:09:48.826 "message": "Invalid MN SPDK_Controller\u001f" 00:09:48.826 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:48.826 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.827 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:49.088 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'utI~TW?f{!= CB3T}Kw4' 00:09:49.089 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'utI~TW?f{!= CB3T}Kw4' nqn.2016-06.io.spdk:cnode27937 00:09:49.089 [2024-07-16 00:21:02.694643] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27937: invalid serial number 'utI~TW?f{!= CB3T}Kw4' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:49.348 { 00:09:49.348 "nqn": "nqn.2016-06.io.spdk:cnode27937", 00:09:49.348 "serial_number": "utI~TW?f{!= C\u007fB3T}Kw4", 00:09:49.348 "method": "nvmf_create_subsystem", 00:09:49.348 
"req_id": 1 00:09:49.348 } 00:09:49.348 Got JSON-RPC error response 00:09:49.348 response: 00:09:49.348 { 00:09:49.348 "code": -32602, 00:09:49.348 "message": "Invalid SN utI~TW?f{!= C\u007fB3T}Kw4" 00:09:49.348 }' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:49.348 { 00:09:49.348 "nqn": "nqn.2016-06.io.spdk:cnode27937", 00:09:49.348 "serial_number": "utI~TW?f{!= C\u007fB3T}Kw4", 00:09:49.348 "method": "nvmf_create_subsystem", 00:09:49.348 "req_id": 1 00:09:49.348 } 00:09:49.348 Got JSON-RPC error response 00:09:49.348 response: 00:09:49.348 { 00:09:49.348 "code": -32602, 00:09:49.348 "message": "Invalid SN utI~TW?f{!= C\u007fB3T}Kw4" 00:09:49.348 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.348 00:21:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 80 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.349 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 
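The long per-character trace above is the test's random-string helper assembling a 41-character candidate model number: each pass picks an ASCII code from the 32-127 table, formats it as hex with printf %x, and appends the character via echo -e '\xNN'. A minimal stand-alone sketch of the same idea follows (this is not the suite's own helper; the function name is illustrative):

gen_random_string() {
    # Pick codes in the printable range 32..127, matching the chars table traced above.
    local length=$1 ll code string=""
    for ((ll = 0; ll < length; ll++)); do
        code=$((RANDOM % 96 + 32))
        # printf %x gives the hex code; echo -e turns \xNN into the actual character.
        string+=$(echo -e "\\x$(printf '%x' "$code")")
    done
    printf '%s\n' "$string"
}
gen_random_string 41   # e.g. a 41-character model number like the one echoed below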
00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '`_/`e([iP)T$Pp!e!~rD'\''4< )J}Me9jNa;`Xq\%A4' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '`_/`e([iP)T$Pp!e!~rD'\''4< )J}Me9jNa;`Xq\%A4' nqn.2016-06.io.spdk:cnode453 00:09:49.608 [2024-07-16 00:21:03.180370] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode453: invalid model number '`_/`e([iP)T$Pp!e!~rD'4< )J}Me9jNa;`Xq\%A4' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:49.608 { 00:09:49.608 "nqn": "nqn.2016-06.io.spdk:cnode453", 00:09:49.608 "model_number": "`_/`e([iP)T$Pp!e!~rD'\''4< )J}Me9jNa;`Xq\\%A4", 00:09:49.608 "method": "nvmf_create_subsystem", 00:09:49.608 "req_id": 1 00:09:49.608 } 00:09:49.608 Got JSON-RPC error response 00:09:49.608 response: 00:09:49.608 { 00:09:49.608 "code": -32602, 00:09:49.608 "message": "Invalid MN `_/`e([iP)T$Pp!e!~rD'\''4< )J}Me9jNa;`Xq\\%A4" 00:09:49.608 }' 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:49.608 { 00:09:49.608 "nqn": "nqn.2016-06.io.spdk:cnode453", 00:09:49.608 "model_number": "`_/`e([iP)T$Pp!e!~rD'4< )J}Me9jNa;`Xq\\%A4", 00:09:49.608 "method": "nvmf_create_subsystem", 00:09:49.608 "req_id": 1 00:09:49.608 } 00:09:49.608 Got JSON-RPC error response 00:09:49.608 response: 00:09:49.608 { 00:09:49.608 "code": -32602, 00:09:49.608 "message": "Invalid MN `_/`e([iP)T$Pp!e!~rD'4< )J}Me9jNa;`Xq\\%A4" 00:09:49.608 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:49.608 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:49.869 [2024-07-16 00:21:03.352969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.869 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:50.131 [2024-07-16 00:21:03.710092] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:50.131 { 00:09:50.131 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:50.131 "listen_address": { 00:09:50.131 "trtype": "tcp", 00:09:50.131 "traddr": "", 00:09:50.131 "trsvcid": "4421" 00:09:50.131 }, 00:09:50.131 "method": "nvmf_subsystem_remove_listener", 00:09:50.131 "req_id": 1 00:09:50.131 } 00:09:50.131 Got JSON-RPC error response 00:09:50.131 response: 00:09:50.131 { 00:09:50.131 "code": -32602, 00:09:50.131 "message": "Invalid parameters" 00:09:50.131 }' 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:50.131 { 00:09:50.131 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:50.131 "listen_address": { 00:09:50.131 "trtype": 
"tcp", 00:09:50.131 "traddr": "", 00:09:50.131 "trsvcid": "4421" 00:09:50.131 }, 00:09:50.131 "method": "nvmf_subsystem_remove_listener", 00:09:50.131 "req_id": 1 00:09:50.131 } 00:09:50.131 Got JSON-RPC error response 00:09:50.131 response: 00:09:50.131 { 00:09:50.131 "code": -32602, 00:09:50.131 "message": "Invalid parameters" 00:09:50.131 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:50.131 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3243 -i 0 00:09:50.390 [2024-07-16 00:21:03.882619] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3243: invalid cntlid range [0-65519] 00:09:50.390 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:50.390 { 00:09:50.390 "nqn": "nqn.2016-06.io.spdk:cnode3243", 00:09:50.390 "min_cntlid": 0, 00:09:50.390 "method": "nvmf_create_subsystem", 00:09:50.390 "req_id": 1 00:09:50.390 } 00:09:50.390 Got JSON-RPC error response 00:09:50.390 response: 00:09:50.390 { 00:09:50.390 "code": -32602, 00:09:50.390 "message": "Invalid cntlid range [0-65519]" 00:09:50.390 }' 00:09:50.390 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:50.390 { 00:09:50.390 "nqn": "nqn.2016-06.io.spdk:cnode3243", 00:09:50.390 "min_cntlid": 0, 00:09:50.390 "method": "nvmf_create_subsystem", 00:09:50.390 "req_id": 1 00:09:50.390 } 00:09:50.390 Got JSON-RPC error response 00:09:50.390 response: 00:09:50.390 { 00:09:50.390 "code": -32602, 00:09:50.390 "message": "Invalid cntlid range [0-65519]" 00:09:50.390 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.390 00:21:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13549 -i 65520 00:09:50.649 [2024-07-16 00:21:04.047160] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13549: invalid cntlid range [65520-65519] 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:50.650 { 00:09:50.650 "nqn": "nqn.2016-06.io.spdk:cnode13549", 00:09:50.650 "min_cntlid": 65520, 00:09:50.650 "method": "nvmf_create_subsystem", 00:09:50.650 "req_id": 1 00:09:50.650 } 00:09:50.650 Got JSON-RPC error response 00:09:50.650 response: 00:09:50.650 { 00:09:50.650 "code": -32602, 00:09:50.650 "message": "Invalid cntlid range [65520-65519]" 00:09:50.650 }' 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:50.650 { 00:09:50.650 "nqn": "nqn.2016-06.io.spdk:cnode13549", 00:09:50.650 "min_cntlid": 65520, 00:09:50.650 "method": "nvmf_create_subsystem", 00:09:50.650 "req_id": 1 00:09:50.650 } 00:09:50.650 Got JSON-RPC error response 00:09:50.650 response: 00:09:50.650 { 00:09:50.650 "code": -32602, 00:09:50.650 "message": "Invalid cntlid range [65520-65519]" 00:09:50.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12546 -I 0 00:09:50.650 [2024-07-16 00:21:04.219716] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12546: invalid cntlid range [1-0] 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:50.650 { 00:09:50.650 "nqn": 
"nqn.2016-06.io.spdk:cnode12546", 00:09:50.650 "max_cntlid": 0, 00:09:50.650 "method": "nvmf_create_subsystem", 00:09:50.650 "req_id": 1 00:09:50.650 } 00:09:50.650 Got JSON-RPC error response 00:09:50.650 response: 00:09:50.650 { 00:09:50.650 "code": -32602, 00:09:50.650 "message": "Invalid cntlid range [1-0]" 00:09:50.650 }' 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:50.650 { 00:09:50.650 "nqn": "nqn.2016-06.io.spdk:cnode12546", 00:09:50.650 "max_cntlid": 0, 00:09:50.650 "method": "nvmf_create_subsystem", 00:09:50.650 "req_id": 1 00:09:50.650 } 00:09:50.650 Got JSON-RPC error response 00:09:50.650 response: 00:09:50.650 { 00:09:50.650 "code": -32602, 00:09:50.650 "message": "Invalid cntlid range [1-0]" 00:09:50.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.650 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1668 -I 65520 00:09:50.910 [2024-07-16 00:21:04.392262] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1668: invalid cntlid range [1-65520] 00:09:50.910 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:50.910 { 00:09:50.910 "nqn": "nqn.2016-06.io.spdk:cnode1668", 00:09:50.910 "max_cntlid": 65520, 00:09:50.910 "method": "nvmf_create_subsystem", 00:09:50.910 "req_id": 1 00:09:50.910 } 00:09:50.910 Got JSON-RPC error response 00:09:50.910 response: 00:09:50.910 { 00:09:50.910 "code": -32602, 00:09:50.910 "message": "Invalid cntlid range [1-65520]" 00:09:50.910 }' 00:09:50.910 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:50.910 { 00:09:50.910 "nqn": "nqn.2016-06.io.spdk:cnode1668", 00:09:50.910 "max_cntlid": 65520, 00:09:50.910 "method": "nvmf_create_subsystem", 00:09:50.910 "req_id": 1 00:09:50.910 } 00:09:50.910 Got JSON-RPC error response 00:09:50.910 response: 00:09:50.910 { 00:09:50.910 "code": -32602, 00:09:50.910 "message": "Invalid cntlid range [1-65520]" 00:09:50.910 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.910 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13391 -i 6 -I 5 00:09:51.170 [2024-07-16 00:21:04.564826] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13391: invalid cntlid range [6-5] 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:51.170 { 00:09:51.170 "nqn": "nqn.2016-06.io.spdk:cnode13391", 00:09:51.170 "min_cntlid": 6, 00:09:51.170 "max_cntlid": 5, 00:09:51.170 "method": "nvmf_create_subsystem", 00:09:51.170 "req_id": 1 00:09:51.170 } 00:09:51.170 Got JSON-RPC error response 00:09:51.170 response: 00:09:51.170 { 00:09:51.170 "code": -32602, 00:09:51.170 "message": "Invalid cntlid range [6-5]" 00:09:51.170 }' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:51.170 { 00:09:51.170 "nqn": "nqn.2016-06.io.spdk:cnode13391", 00:09:51.170 "min_cntlid": 6, 00:09:51.170 "max_cntlid": 5, 00:09:51.170 "method": "nvmf_create_subsystem", 00:09:51.170 "req_id": 1 00:09:51.170 } 00:09:51.170 Got JSON-RPC error response 00:09:51.170 response: 00:09:51.170 { 00:09:51.170 "code": -32602, 00:09:51.170 "message": "Invalid cntlid range [6-5]" 00:09:51.170 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:51.170 00:21:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:51.170 { 00:09:51.170 "name": "foobar", 00:09:51.170 "method": "nvmf_delete_target", 00:09:51.170 "req_id": 1 00:09:51.170 } 00:09:51.170 Got JSON-RPC error response 00:09:51.170 response: 00:09:51.170 { 00:09:51.170 "code": -32602, 00:09:51.170 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:51.170 }' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:51.170 { 00:09:51.170 "name": "foobar", 00:09:51.170 "method": "nvmf_delete_target", 00:09:51.170 "req_id": 1 00:09:51.170 } 00:09:51.170 Got JSON-RPC error response 00:09:51.170 response: 00:09:51.170 { 00:09:51.170 "code": -32602, 00:09:51.170 "message": "The specified target doesn't exist, cannot delete it." 00:09:51.170 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.170 rmmod nvme_tcp 00:09:51.170 rmmod nvme_fabrics 00:09:51.170 rmmod nvme_keyring 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 938661 ']' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 938661 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 938661 ']' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 938661 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.170 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938661 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938661' 00:09:51.430 killing process with pid 938661 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 938661 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 938661 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.430 00:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.973 00:21:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.974 00:09:53.974 real 0m14.413s 00:09:53.974 user 0m19.526s 00:09:53.974 sys 0m6.978s 00:09:53.974 00:21:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.974 00:21:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.974 ************************************ 00:09:53.974 END TEST nvmf_invalid 00:09:53.974 ************************************ 00:09:53.974 00:21:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:53.974 00:21:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:53.974 00:21:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.974 00:21:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.974 00:21:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.974 ************************************ 00:09:53.974 START TEST nvmf_abort 00:09:53.974 ************************************ 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:53.974 * Looking for test storage... 
00:09:53.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
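The common.sh defaults recorded just above (NVMF_PORT=4420, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, the generated NVME_HOSTNQN/NVME_HOSTID pair, and NVME_CONNECT='nvme connect') are the kind of pieces a host-side connect in this suite would combine. A hedged illustration of that shape, using a placeholder target address that is not taken from this run:

# Illustrative only: 10.0.0.2 is a placeholder address, not one from this log.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396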
00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.974 00:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.117 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:02.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.118 00:21:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:02.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:02.118 Found net devices under 0000:31:00.0: cvl_0_0 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:02.118 Found net devices under 0000:31:00.1: cvl_0_1 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:10:02.118 00:10:02.118 --- 10.0.0.2 ping statistics --- 00:10:02.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.118 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
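Note on the nvmf_tcp_init trace above: the test builds its initiator/target pair on a single host by pushing the target-side E810 port into a private network namespace and keeping the initiator-side port in the default one. A condensed reconstruction of the commands just traced; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this CI host:
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # target port lives in its own namespace, initiator port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1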
00:10:02.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:02.118 00:10:02.118 --- 10.0.0.1 ping statistics --- 00:10:02.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.118 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=944843 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 944843 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 944843 ']' 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.118 00:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.118 [2024-07-16 00:21:15.600923] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:02.118 [2024-07-16 00:21:15.600987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.118 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.118 [2024-07-16 00:21:15.699063] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.379 [2024-07-16 00:21:15.792878] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.379 [2024-07-16 00:21:15.792937] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:02.379 [2024-07-16 00:21:15.792946] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.379 [2024-07-16 00:21:15.792953] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.379 [2024-07-16 00:21:15.792959] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.379 [2024-07-16 00:21:15.793089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.379 [2024-07-16 00:21:15.793273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.379 [2024-07-16 00:21:15.793335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 [2024-07-16 00:21:16.434303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 Malloc0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 Delay0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 [2024-07-16 00:21:16.514754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.951 00:21:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:02.951 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.212 [2024-07-16 00:21:16.633918] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:05.123 Initializing NVMe Controllers 00:10:05.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:05.123 controller IO queue size 128 less than required 00:10:05.123 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:05.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:05.123 Initialization complete. Launching workers. 
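Note on the abort.sh setup traced above: each rpc_cmd call is effectively the test framework's wrapper around scripts/rpc.py and the target's default /var/tmp/spdk.sock JSON-RPC socket, so outside the framework the same target can be sketched with direct rpc.py calls (paths below are relative to the spdk checkout; this is an illustrative condensation, not the script verbatim):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MB RAM bdev, 4096-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # the delay bdev keeps reads in flight long enough for the abort example to cancel them
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128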
00:10:05.123 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33120 00:10:05.123 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33181, failed to submit 62 00:10:05.123 success 33124, unsuccess 57, failed 0 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.123 rmmod nvme_tcp 00:10:05.123 rmmod nvme_fabrics 00:10:05.123 rmmod nvme_keyring 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 944843 ']' 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 944843 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 944843 ']' 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 944843 00:10:05.123 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944843 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944843' 00:10:05.382 killing process with pid 944843 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 944843 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 944843 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.382 00:21:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.928 00:21:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.928 00:10:07.928 real 0m13.900s 00:10:07.928 user 0m13.433s 00:10:07.928 sys 0m7.111s 00:10:07.928 00:21:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.928 00:21:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.928 ************************************ 00:10:07.928 END TEST nvmf_abort 00:10:07.928 ************************************ 00:10:07.928 00:21:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.928 00:21:21 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:07.928 00:21:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.928 00:21:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.928 00:21:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.928 ************************************ 00:10:07.928 START TEST nvmf_ns_hotplug_stress 00:10:07.928 ************************************ 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:07.928 * Looking for test storage... 00:10:07.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.928 00:21:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:07.928 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.929 00:21:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.929 00:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:16.064 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:16.064 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.064 00:21:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:16.064 Found net devices under 0000:31:00.0: cvl_0_0 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:16.064 Found net devices under 0000:31:00.1: cvl_0_1 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.064 00:21:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.064 00:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:10:16.064 00:10:16.064 --- 10.0.0.2 ping statistics --- 00:10:16.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.064 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
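Side note on the device discovery repeated near the top of both tests (gather_supported_nvmf_pci_devs): supported NIC PCI IDs such as 8086:159b are resolved to kernel interface names purely through sysfs, without parsing lspci or ip output. A minimal standalone equivalent for one of the E810 functions found above:
  pci=0000:31:00.0
  ls "/sys/bus/pci/devices/$pci/net/"       # prints the bound netdev name, cvl_0_0 on this host
  cat /sys/class/net/cvl_0_0/operstate      # presumably what the '[[ up == up ]]' check above is reading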
00:10:16.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:10:16.064 00:10:16.064 --- 10.0.0.1 ping statistics --- 00:10:16.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.064 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.064 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=950211 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 950211 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 950211 ']' 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.065 00:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.065 [2024-07-16 00:21:29.393702] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
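nvmfappstart (nvmfpid=950211 above) launches nvmf_tgt inside the target namespace and blocks until the app's JSON-RPC socket answers before the test continues. A rough equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket and paths relative to the spdk checkout:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the target is ready; waitforlisten in the trace does roughly this
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || break    # stop waiting if the target already died
      sleep 0.5
  done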
00:10:16.065 [2024-07-16 00:21:29.393749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.065 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.065 [2024-07-16 00:21:29.484056] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.065 [2024-07-16 00:21:29.548959] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.065 [2024-07-16 00:21:29.548996] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.065 [2024-07-16 00:21:29.549004] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.065 [2024-07-16 00:21:29.549011] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.065 [2024-07-16 00:21:29.549016] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.065 [2024-07-16 00:21:29.549119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.065 [2024-07-16 00:21:29.549277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.065 [2024-07-16 00:21:29.549475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:16.635 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:16.896 [2024-07-16 00:21:30.389323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.896 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.155 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.155 [2024-07-16 00:21:30.734749] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.155 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.414 00:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:17.676 Malloc0 00:10:17.676 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:17.676 Delay0 00:10:17.676 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.937 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:18.198 NULL1 00:10:18.198 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:18.198 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:18.198 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=950589 00:10:18.198 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:18.198 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.198 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.458 00:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.718 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:18.718 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:18.718 true 00:10:18.718 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:18.718 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.978 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.239 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:19.239 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:19.239 true 00:10:19.239 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:19.239 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.500 00:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.500 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:19.500 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:19.759 true 00:10:19.759 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:19.759 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.018 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.018 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:20.018 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:20.278 true 00:10:20.278 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:20.278 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.541 00:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.541 00:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:20.541 00:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:20.801 true 00:10:20.801 00:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:20.801 00:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.741 Read completed with error (sct=0, sc=11) 00:10:21.741 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.741 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:21.741 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:22.001 true 00:10:22.001 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:22.001 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.261 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.261 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:22.261 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:22.520 true 00:10:22.520 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:22.520 00:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.520 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.781 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:22.781 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:23.040 true 00:10:23.040 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:23.041 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.041 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.300 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:23.300 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:23.560 true 00:10:23.560 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:23.560 00:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.560 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.821 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:23.821 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:23.821 true 00:10:23.821 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:23.821 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.081 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.341 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:24.341 00:21:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:24.341 true 00:10:24.341 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:24.341 00:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.601 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.861 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:24.861 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:24.861 true 00:10:24.861 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:24.861 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.121 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.381 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:25.381 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:25.381 true 00:10:25.381 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:25.381 00:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.642 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.902 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:25.902 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:25.902 true 00:10:25.902 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:25.902 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.163 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.163 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:26.163 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 
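The entries above and below repeat one loop from ns_hotplug_stress.sh: while the I/O workload process (PID 950589 in this run) is still alive, namespace 1 is removed and re-added on nqn.2016-06.io.spdk:cnode1, null_size is bumped, and NULL1 is resized to the new value. A minimal bash sketch of that loop, reconstructed from the sh@44-sh@50 tags; the while condition, the variable names, and the rpc_py shorthand are assumptions, not the script verbatim:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=950589                                                          # workload PID from this run
    null_size=1000
    while kill -0 "$perf_pid"; do                                            # sh@44: loop until the workload exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # sh@45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # sh@46: hot-add it back (Delay0 bdev)
        null_size=$((null_size + 1))                                         # sh@49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                        # sh@50: grow NULL1 while I/O runs
    done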
00:10:26.423 true 00:10:26.423 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:26.423 00:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.682 00:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.682 00:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:26.682 00:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:26.943 true 00:10:26.943 00:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:26.943 00:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.907 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.907 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:27.907 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:28.167 true 00:10:28.167 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:28.167 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.167 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.428 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:28.428 00:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:28.712 true 00:10:28.712 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:28.712 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.712 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.001 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:29.001 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:29.001 true 00:10:29.001 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:29.001 00:21:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.261 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.521 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:29.521 00:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:29.521 true 00:10:29.521 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:29.521 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.781 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.042 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:30.042 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:30.042 true 00:10:30.042 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:30.042 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.302 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.564 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:30.564 00:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:30.564 true 00:10:30.564 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:30.564 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.825 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.825 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:30.825 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:31.086 true 00:10:31.086 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:31.086 00:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:32.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.051 00:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.312 00:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:32.312 00:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:32.312 true 00:10:32.312 00:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:32.312 00:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.573 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.833 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:32.833 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:32.833 true 00:10:32.833 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:32.833 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.093 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.093 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:33.093 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:33.353 true 00:10:33.353 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:33.353 00:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.614 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.614 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:33.614 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:33.875 true 00:10:33.875 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:33.875 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:34.135 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.135 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:34.135 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:34.396 true 00:10:34.396 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:34.396 00:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.657 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.657 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:34.657 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:34.917 true 00:10:34.917 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:34.917 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.177 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.177 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:35.177 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:35.437 true 00:10:35.437 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:35.437 00:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.437 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.698 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:35.698 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:35.958 true 00:10:35.958 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:35.958 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.958 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.219 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:36.219 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:36.481 true 00:10:36.481 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:36.481 00:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.423 00:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.423 00:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:37.423 00:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:37.423 true 00:10:37.684 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:37.684 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.684 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.944 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:37.944 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:37.944 true 00:10:38.205 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:38.205 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.205 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.466 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:38.466 00:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:38.466 true 00:10:38.466 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:38.466 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.726 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.988 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:38.988 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:38.988 true 00:10:38.988 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:38.988 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.249 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.509 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:39.509 00:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:39.509 true 00:10:39.509 00:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:39.509 00:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.457 00:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.718 00:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:40.718 00:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:40.718 true 00:10:40.718 00:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:40.718 00:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.659 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.921 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:41.921 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:41.921 true 00:10:41.921 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:41.921 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.182 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.443 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:42.443 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:42.443 true 00:10:42.443 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:42.443 00:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.704 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.704 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:42.704 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:42.965 true 00:10:42.965 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:42.965 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.226 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.226 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:43.226 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:43.487 true 00:10:43.487 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:43.487 00:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.747 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.747 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:43.747 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:44.009 true 00:10:44.009 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:44.009 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.270 
00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.270 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:44.270 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:44.530 true 00:10:44.530 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:44.530 00:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.789 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.789 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:44.789 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:45.049 true 00:10:45.049 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:45.049 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.049 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.308 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:45.308 00:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:45.568 true 00:10:45.568 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:45.568 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.568 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.828 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:45.828 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:46.087 true 00:10:46.087 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:46.087 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.087 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.347 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:46.347 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:46.347 true 00:10:46.605 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:46.605 00:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.605 00:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.863 00:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:46.863 00:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:46.863 true 00:10:46.863 00:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:46.863 00:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.801 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.060 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:48.060 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:48.060 true 00:10:48.060 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:48.060 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.319 00:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.579 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:48.579 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:48.579 true 00:10:48.579 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:48.579 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.579 Initializing NVMe Controllers 00:10:48.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:48.579 Controller IO queue size 128, less than required. 
00:10:48.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.579 Controller IO queue size 128, less than required. 00:10:48.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:48.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:48.579 Initialization complete. Launching workers. 00:10:48.579 ======================================================== 00:10:48.579 Latency(us) 00:10:48.579 Device Information : IOPS MiB/s Average min max 00:10:48.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 397.04 0.19 72937.88 2522.93 1133182.98 00:10:48.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7098.61 3.47 17971.48 1784.97 546257.54 00:10:48.579 ======================================================== 00:10:48.579 Total : 7495.65 3.66 20883.02 1784.97 1133182.98 00:10:48.579 00:10:48.839 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.099 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:49.099 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:49.099 true 00:10:49.099 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 950589 00:10:49.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (950589) - No such process 00:10:49.099 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 950589 00:10:49.099 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.359 00:22:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:49.620 null0 00:10:49.620 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.620 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.620 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:49.880 null1 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:49.880 null2 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:49.880 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:50.139 null3 00:10:50.139 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.139 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.139 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:50.399 null4 00:10:50.399 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.399 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.399 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:50.399 null5 00:10:50.399 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.399 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.400 00:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:50.659 null6 00:10:50.659 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.659 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.659 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:50.659 null7 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
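Once the workload has exited (the "kill: (950589) - No such process" entry above), the script moves to its multi-threaded phase: it creates eight null bdevs (null0 through null7) and starts one add_remove worker per bdev in the background, saving the PIDs for the "wait 957123 957126 ..." entry that follows. A rough bash equivalent of the sh@58-sh@66 tags; the loop shape mirrors the tags, everything else is assumed, and the add_remove helper itself is sketched a few entries further down:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                           # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096     # sh@60: args as logged (size 100 MB, 4096-byte blocks)
    done
    for ((i = 0; i < nthreads; i++)); do                 # sh@62
        add_remove "$((i + 1))" "null$i" &               # sh@63: nsid i+1 backed by null$i, run in background
        pids+=($!)                                       # sh@64
    done
    wait "${pids[@]}"                                    # sh@66: wait for all eight workers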
00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
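The interleaved sh@17/sh@18 add and remove calls for namespaces 1 through 8, above and below, come from eight concurrent copies of the add_remove helper, one per background worker, which is why their output appears shuffled. A sketch reconstructed from the sh@14-sh@18 tags; the function wrapper and argument handling are assumptions:

    add_remove() {
        local nsid=$1 bdev=$2                                                                 # sh@14
        for ((i = 0; i < 10; i++)); do                                                        # sh@16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"     # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"             # sh@18
        done
    }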
00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 957123 957126 957127 957129 957132 957134 957137 957140 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.921 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:10:51.180 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.440 00:22:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.440 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:51.700 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.701 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:51.701 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:51.701 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:51.961 00:22:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.961 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:52.222 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.482 00:22:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.482 00:22:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.482 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:52.743 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.005 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.269 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.529 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.529 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.529 00:22:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.529 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.789 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.790 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.050 00:22:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.050 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.051 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.311 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.572 00:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.572 rmmod nvme_tcp 00:10:54.572 rmmod nvme_fabrics 00:10:54.572 rmmod nvme_keyring 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 950211 ']' 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 950211 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 950211 ']' 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 950211 00:10:54.572 00:22:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950211 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950211' 00:10:54.572 killing process with pid 950211 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 950211 00:10:54.572 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 950211 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.833 00:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.766 00:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.766 00:10:56.766 real 0m49.242s 00:10:56.766 user 3m14.215s 00:10:56.766 sys 0m16.035s 00:10:56.766 00:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.766 00:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.766 ************************************ 00:10:56.766 END TEST nvmf_ns_hotplug_stress 00:10:56.766 ************************************ 00:10:56.766 00:22:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:56.766 00:22:10 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:56.766 00:22:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.766 00:22:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.766 00:22:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.766 ************************************ 00:10:56.766 START TEST nvmf_connect_stress 00:10:56.766 ************************************ 00:10:56.766 00:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:57.027 * Looking for test storage... 
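For readers skimming the trace above: the long run of nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs is the main loop of target/ns_hotplug_stress.sh. A rough, sequential sketch of that pattern is given below; it is reconstructed only from the commands visible in the log — the shipped script appears to randomize the ordering and dispatch the RPCs concurrently (which is why the nsids interleave above), and that machinery is not reproduced here. The rpc.py path, subsystem NQN, nsid range and the ten-iteration bound are taken from the log.

#!/usr/bin/env bash
# Sketch only: approximates the add/remove churn seen in the trace above.
# Not the shipped test script; ordering is simplified to be sequential.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    # Attach null0..null7 as namespaces 1..8 of the subsystem.
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # Detach them again so the connected host keeps seeing hot-remove events.
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

Each pass attaches the eight null bdevs as namespaces 1-8 of cnode1 and detaches them again, so the host side sees a steady stream of namespace hot-add/hot-remove events for the ten iterations bounded by the (( i < 10 )) checks in the trace.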
00:10:57.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.027 00:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:05.230 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:05.230 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:05.230 Found net devices under 0000:31:00.0: cvl_0_0 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.230 00:22:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:05.230 Found net devices under 0000:31:00.1: cvl_0_1 00:11:05.230 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:05.231 00:11:05.231 --- 10.0.0.2 ping statistics --- 00:11:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.231 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:11:05.231 00:11:05.231 --- 10.0.0.1 ping statistics --- 00:11:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.231 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=962897 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 962897 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 962897 ']' 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.231 00:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.231 [2024-07-16 00:22:18.781466] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
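The nvmf_tcp_init sequence above boils down to a small amount of network-namespace plumbing before the target is started: the target-side port is moved into its own namespace, both ends get 10.0.0.0/24 addresses, reachability is ping-tested in both directions, and nvmf_tgt is launched inside the namespace. A simplified sketch of the same steps, reusing the interface names (cvl_0_0, cvl_0_1) and addresses seen in this run (they differ on other hosts):

    # Put the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in
    # Verify reachability in both directions, then launch the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # (the harness backgrounds nvmf_tgt and then waits for it to listen on /var/tmp/spdk.sock)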
00:11:05.231 [2024-07-16 00:22:18.781529] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.231 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.492 [2024-07-16 00:22:18.878347] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.492 [2024-07-16 00:22:18.972944] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.492 [2024-07-16 00:22:18.973005] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.492 [2024-07-16 00:22:18.973014] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.492 [2024-07-16 00:22:18.973021] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.492 [2024-07-16 00:22:18.973027] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.492 [2024-07-16 00:22:18.973174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.492 [2024-07-16 00:22:18.973344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.492 [2024-07-16 00:22:18.973495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.063 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.063 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:06.063 00:22:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 [2024-07-16 00:22:19.615133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 [2024-07-16 00:22:19.651396] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 NULL1 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=962979 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.064 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.325 00:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.585 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.585 00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:06.585 00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.585 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.585 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.846 00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:06.846 00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.846 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.846 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.416 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.416 00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:07.416 
00:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.416 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.416 00:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.676 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.676 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:07.676 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.676 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.676 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.936 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.936 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:07.936 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.936 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.936 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.196 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.196 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:08.196 00:22:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.196 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.196 00:22:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.456 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.456 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:08.456 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.456 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.456 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.027 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.027 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:09.027 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.027 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.027 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.288 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.288 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:09.288 00:22:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.288 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.288 00:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.549 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.549 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:09.549 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:09.549 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.549 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.812 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.812 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:09.812 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.812 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.812 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.091 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.091 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:10.091 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.091 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.091 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.661 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.661 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:10.661 00:22:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.661 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.661 00:22:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.922 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.922 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:10.922 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.922 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.922 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.183 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.183 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:11.183 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.183 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.183 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.444 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.444 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:11.444 00:22:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.444 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.444 00:22:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.704 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.704 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:11.704 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.704 00:22:25 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.704 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.273 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.274 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:12.274 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.274 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.274 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.533 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.534 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:12.534 00:22:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.534 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.534 00:22:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.793 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.793 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:12.793 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.793 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.793 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.054 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.054 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:13.054 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.054 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.054 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.315 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.315 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:13.315 00:22:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.315 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.315 00:22:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.886 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.886 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:13.886 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.886 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.886 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.146 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.146 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:14.146 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.146 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.146 
00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.407 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.407 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:14.407 00:22:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.407 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.407 00:22:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.668 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.669 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:14.669 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.669 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.669 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.930 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:14.930 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.930 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.930 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.500 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.500 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:15.500 00:22:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.500 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.500 00:22:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.761 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.761 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:15.761 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.761 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.761 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.022 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.022 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:16.022 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.022 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.022 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 962979 00:11:16.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (962979) - No such process 00:11:16.283 00:22:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 962979 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.283 rmmod nvme_tcp 00:11:16.283 rmmod nvme_fabrics 00:11:16.283 rmmod nvme_keyring 00:11:16.283 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 962897 ']' 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 962897 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 962897 ']' 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 962897 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962897 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962897' 00:11:16.545 killing process with pid 962897 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 962897 00:11:16.545 00:22:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 962897 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.545 00:22:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:19.088 00:22:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.088 00:11:19.088 real 0m21.765s 00:11:19.088 user 0m42.333s 00:11:19.088 sys 0m9.323s 00:11:19.088 00:22:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.088 00:22:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.088 ************************************ 00:11:19.088 END TEST nvmf_connect_stress 00:11:19.088 ************************************ 00:11:19.088 00:22:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:19.088 00:22:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.088 00:22:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.088 00:22:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.088 00:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.088 ************************************ 00:11:19.088 START TEST nvmf_fused_ordering 00:11:19.088 ************************************ 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.088 * Looking for test storage... 00:11:19.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.088 00:22:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.088 00:22:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:27.222 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:27.222 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:27.222 Found net devices under 0000:31:00.0: cvl_0_0 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:27.222 Found net devices under 0000:31:00.1: cvl_0_1 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.222 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:11:27.223 00:11:27.223 --- 10.0.0.2 ping statistics --- 00:11:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.223 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:27.223 00:11:27.223 --- 10.0.0.1 ping statistics --- 00:11:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.223 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=969731 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 969731 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 969731 ']' 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.223 00:22:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.223 [2024-07-16 00:22:40.648452] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:11:27.223 [2024-07-16 00:22:40.648515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.223 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.223 [2024-07-16 00:22:40.747969] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.223 [2024-07-16 00:22:40.842037] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.223 [2024-07-16 00:22:40.842097] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.223 [2024-07-16 00:22:40.842105] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.223 [2024-07-16 00:22:40.842113] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.223 [2024-07-16 00:22:40.842119] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
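Once the fused_ordering target is up and answering on /var/tmp/spdk.sock, the rpc_cmd calls that follow configure it: a TCP transport, a subsystem with a listener on 10.0.0.2:4420, and a null bdev exposed as namespace 1, after which the fused_ordering tool is pointed at the listener. Outside the harness the same sequence can be issued with the standard scripts/rpc.py client; a sketch reusing the names and sizes from this run:

    # Target-side configuration over the default /var/tmp/spdk.sock RPC socket.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Initiator side: run the fused-ordering test against the listener.
    test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'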
00:11:27.223 [2024-07-16 00:22:40.842146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 [2024-07-16 00:22:41.485777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 [2024-07-16 00:22:41.510043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 NULL1 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.164 00:22:41 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.164 00:22:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:28.164 [2024-07-16 00:22:41.579073] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:11:28.164 [2024-07-16 00:22:41.579119] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970027 ] 00:11:28.164 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.425 Attached to nqn.2016-06.io.spdk:cnode1 00:11:28.425 Namespace ID: 1 size: 1GB 00:11:28.425 fused_ordering(0) 00:11:28.425 fused_ordering(1) 00:11:28.425 fused_ordering(2) 00:11:28.425 fused_ordering(3) 00:11:28.425 fused_ordering(4) 00:11:28.425 fused_ordering(5) 00:11:28.425 fused_ordering(6) 00:11:28.425 fused_ordering(7) 00:11:28.425 fused_ordering(8) 00:11:28.425 fused_ordering(9) 00:11:28.425 fused_ordering(10) 00:11:28.425 fused_ordering(11) 00:11:28.425 fused_ordering(12) 00:11:28.425 fused_ordering(13) 00:11:28.425 fused_ordering(14) 00:11:28.425 fused_ordering(15) 00:11:28.425 fused_ordering(16) 00:11:28.425 fused_ordering(17) 00:11:28.425 fused_ordering(18) 00:11:28.425 fused_ordering(19) 00:11:28.425 fused_ordering(20) 00:11:28.425 fused_ordering(21) 00:11:28.425 fused_ordering(22) 00:11:28.425 fused_ordering(23) 00:11:28.425 fused_ordering(24) 00:11:28.425 fused_ordering(25) 00:11:28.425 fused_ordering(26) 00:11:28.425 fused_ordering(27) 00:11:28.425 fused_ordering(28) 00:11:28.425 fused_ordering(29) 00:11:28.425 fused_ordering(30) 00:11:28.425 fused_ordering(31) 00:11:28.425 fused_ordering(32) 00:11:28.425 fused_ordering(33) 00:11:28.425 fused_ordering(34) 00:11:28.425 fused_ordering(35) 00:11:28.425 fused_ordering(36) 00:11:28.425 fused_ordering(37) 00:11:28.425 fused_ordering(38) 00:11:28.425 fused_ordering(39) 00:11:28.425 fused_ordering(40) 00:11:28.425 fused_ordering(41) 00:11:28.425 fused_ordering(42) 00:11:28.425 fused_ordering(43) 00:11:28.425 fused_ordering(44) 00:11:28.425 fused_ordering(45) 00:11:28.425 fused_ordering(46) 00:11:28.425 fused_ordering(47) 00:11:28.425 fused_ordering(48) 00:11:28.425 fused_ordering(49) 00:11:28.425 fused_ordering(50) 00:11:28.425 fused_ordering(51) 00:11:28.425 fused_ordering(52) 00:11:28.425 fused_ordering(53) 00:11:28.425 fused_ordering(54) 00:11:28.425 fused_ordering(55) 00:11:28.425 fused_ordering(56) 00:11:28.425 fused_ordering(57) 00:11:28.425 fused_ordering(58) 00:11:28.425 fused_ordering(59) 00:11:28.425 fused_ordering(60) 00:11:28.425 fused_ordering(61) 00:11:28.425 fused_ordering(62) 00:11:28.425 fused_ordering(63) 00:11:28.425 fused_ordering(64) 00:11:28.425 fused_ordering(65) 00:11:28.425 fused_ordering(66) 00:11:28.425 fused_ordering(67) 00:11:28.425 fused_ordering(68) 00:11:28.425 fused_ordering(69) 00:11:28.425 fused_ordering(70) 00:11:28.425 fused_ordering(71) 00:11:28.425 fused_ordering(72) 00:11:28.425 fused_ordering(73) 00:11:28.425 fused_ordering(74) 00:11:28.425 fused_ordering(75) 00:11:28.425 fused_ordering(76) 00:11:28.425 fused_ordering(77) 00:11:28.425 fused_ordering(78) 00:11:28.425 
fused_ordering(79) 00:11:28.425 fused_ordering(80) 00:11:28.425 fused_ordering(81) 00:11:28.425 fused_ordering(82) 00:11:28.425 fused_ordering(83) 00:11:28.425 fused_ordering(84) 00:11:28.425 fused_ordering(85) 00:11:28.425 fused_ordering(86) 00:11:28.425 fused_ordering(87) 00:11:28.425 fused_ordering(88) 00:11:28.425 fused_ordering(89) 00:11:28.425 fused_ordering(90) 00:11:28.425 fused_ordering(91) 00:11:28.425 fused_ordering(92) 00:11:28.425 fused_ordering(93) 00:11:28.425 fused_ordering(94) 00:11:28.425 fused_ordering(95) 00:11:28.425 fused_ordering(96) 00:11:28.425 fused_ordering(97) 00:11:28.425 fused_ordering(98) 00:11:28.425 fused_ordering(99) 00:11:28.425 fused_ordering(100) 00:11:28.425 fused_ordering(101) 00:11:28.425 fused_ordering(102) 00:11:28.425 fused_ordering(103) 00:11:28.425 fused_ordering(104) 00:11:28.425 fused_ordering(105) 00:11:28.425 fused_ordering(106) 00:11:28.425 fused_ordering(107) 00:11:28.425 fused_ordering(108) 00:11:28.425 fused_ordering(109) 00:11:28.425 fused_ordering(110) 00:11:28.425 fused_ordering(111) 00:11:28.425 fused_ordering(112) 00:11:28.425 fused_ordering(113) 00:11:28.425 fused_ordering(114) 00:11:28.425 fused_ordering(115) 00:11:28.425 fused_ordering(116) 00:11:28.425 fused_ordering(117) 00:11:28.425 fused_ordering(118) 00:11:28.425 fused_ordering(119) 00:11:28.425 fused_ordering(120) 00:11:28.425 fused_ordering(121) 00:11:28.425 fused_ordering(122) 00:11:28.425 fused_ordering(123) 00:11:28.425 fused_ordering(124) 00:11:28.425 fused_ordering(125) 00:11:28.425 fused_ordering(126) 00:11:28.425 fused_ordering(127) 00:11:28.425 fused_ordering(128) 00:11:28.425 fused_ordering(129) 00:11:28.425 fused_ordering(130) 00:11:28.425 fused_ordering(131) 00:11:28.425 fused_ordering(132) 00:11:28.425 fused_ordering(133) 00:11:28.425 fused_ordering(134) 00:11:28.425 fused_ordering(135) 00:11:28.425 fused_ordering(136) 00:11:28.425 fused_ordering(137) 00:11:28.425 fused_ordering(138) 00:11:28.425 fused_ordering(139) 00:11:28.425 fused_ordering(140) 00:11:28.425 fused_ordering(141) 00:11:28.425 fused_ordering(142) 00:11:28.425 fused_ordering(143) 00:11:28.425 fused_ordering(144) 00:11:28.425 fused_ordering(145) 00:11:28.425 fused_ordering(146) 00:11:28.425 fused_ordering(147) 00:11:28.425 fused_ordering(148) 00:11:28.425 fused_ordering(149) 00:11:28.425 fused_ordering(150) 00:11:28.425 fused_ordering(151) 00:11:28.425 fused_ordering(152) 00:11:28.425 fused_ordering(153) 00:11:28.425 fused_ordering(154) 00:11:28.425 fused_ordering(155) 00:11:28.425 fused_ordering(156) 00:11:28.425 fused_ordering(157) 00:11:28.425 fused_ordering(158) 00:11:28.425 fused_ordering(159) 00:11:28.425 fused_ordering(160) 00:11:28.425 fused_ordering(161) 00:11:28.425 fused_ordering(162) 00:11:28.425 fused_ordering(163) 00:11:28.425 fused_ordering(164) 00:11:28.425 fused_ordering(165) 00:11:28.425 fused_ordering(166) 00:11:28.425 fused_ordering(167) 00:11:28.425 fused_ordering(168) 00:11:28.425 fused_ordering(169) 00:11:28.425 fused_ordering(170) 00:11:28.425 fused_ordering(171) 00:11:28.425 fused_ordering(172) 00:11:28.425 fused_ordering(173) 00:11:28.425 fused_ordering(174) 00:11:28.425 fused_ordering(175) 00:11:28.425 fused_ordering(176) 00:11:28.425 fused_ordering(177) 00:11:28.425 fused_ordering(178) 00:11:28.425 fused_ordering(179) 00:11:28.425 fused_ordering(180) 00:11:28.425 fused_ordering(181) 00:11:28.425 fused_ordering(182) 00:11:28.425 fused_ordering(183) 00:11:28.425 fused_ordering(184) 00:11:28.425 fused_ordering(185) 00:11:28.425 fused_ordering(186) 00:11:28.425 
fused_ordering(187) 00:11:28.425 fused_ordering(188) 00:11:28.425 fused_ordering(189) 00:11:28.425 fused_ordering(190) 00:11:28.425 fused_ordering(191) 00:11:28.425 fused_ordering(192) 00:11:28.425 fused_ordering(193) 00:11:28.425 fused_ordering(194) 00:11:28.425 fused_ordering(195) 00:11:28.425 fused_ordering(196) 00:11:28.425 fused_ordering(197) 00:11:28.425 fused_ordering(198) 00:11:28.425 fused_ordering(199) 00:11:28.425 fused_ordering(200) 00:11:28.425 fused_ordering(201) 00:11:28.425 fused_ordering(202) 00:11:28.425 fused_ordering(203) 00:11:28.425 fused_ordering(204) 00:11:28.425 fused_ordering(205) 00:11:28.998 fused_ordering(206) 00:11:28.998 fused_ordering(207) 00:11:28.998 fused_ordering(208) 00:11:28.998 fused_ordering(209) 00:11:28.998 fused_ordering(210) 00:11:28.998 fused_ordering(211) 00:11:28.998 fused_ordering(212) 00:11:28.998 fused_ordering(213) 00:11:28.998 fused_ordering(214) 00:11:28.998 fused_ordering(215) 00:11:28.998 fused_ordering(216) 00:11:28.998 fused_ordering(217) 00:11:28.998 fused_ordering(218) 00:11:28.998 fused_ordering(219) 00:11:28.998 fused_ordering(220) 00:11:28.998 fused_ordering(221) 00:11:28.998 fused_ordering(222) 00:11:28.998 fused_ordering(223) 00:11:28.998 fused_ordering(224) 00:11:28.998 fused_ordering(225) 00:11:28.998 fused_ordering(226) 00:11:28.998 fused_ordering(227) 00:11:28.998 fused_ordering(228) 00:11:28.998 fused_ordering(229) 00:11:28.998 fused_ordering(230) 00:11:28.998 fused_ordering(231) 00:11:28.998 fused_ordering(232) 00:11:28.998 fused_ordering(233) 00:11:28.998 fused_ordering(234) 00:11:28.998 fused_ordering(235) 00:11:28.998 fused_ordering(236) 00:11:28.999 fused_ordering(237) 00:11:28.999 fused_ordering(238) 00:11:28.999 fused_ordering(239) 00:11:28.999 fused_ordering(240) 00:11:28.999 fused_ordering(241) 00:11:28.999 fused_ordering(242) 00:11:28.999 fused_ordering(243) 00:11:28.999 fused_ordering(244) 00:11:28.999 fused_ordering(245) 00:11:28.999 fused_ordering(246) 00:11:28.999 fused_ordering(247) 00:11:28.999 fused_ordering(248) 00:11:28.999 fused_ordering(249) 00:11:28.999 fused_ordering(250) 00:11:28.999 fused_ordering(251) 00:11:28.999 fused_ordering(252) 00:11:28.999 fused_ordering(253) 00:11:28.999 fused_ordering(254) 00:11:28.999 fused_ordering(255) 00:11:28.999 fused_ordering(256) 00:11:28.999 fused_ordering(257) 00:11:28.999 fused_ordering(258) 00:11:28.999 fused_ordering(259) 00:11:28.999 fused_ordering(260) 00:11:28.999 fused_ordering(261) 00:11:28.999 fused_ordering(262) 00:11:28.999 fused_ordering(263) 00:11:28.999 fused_ordering(264) 00:11:28.999 fused_ordering(265) 00:11:28.999 fused_ordering(266) 00:11:28.999 fused_ordering(267) 00:11:28.999 fused_ordering(268) 00:11:28.999 fused_ordering(269) 00:11:28.999 fused_ordering(270) 00:11:28.999 fused_ordering(271) 00:11:28.999 fused_ordering(272) 00:11:28.999 fused_ordering(273) 00:11:28.999 fused_ordering(274) 00:11:28.999 fused_ordering(275) 00:11:28.999 fused_ordering(276) 00:11:28.999 fused_ordering(277) 00:11:28.999 fused_ordering(278) 00:11:28.999 fused_ordering(279) 00:11:28.999 fused_ordering(280) 00:11:28.999 fused_ordering(281) 00:11:28.999 fused_ordering(282) 00:11:28.999 fused_ordering(283) 00:11:28.999 fused_ordering(284) 00:11:28.999 fused_ordering(285) 00:11:28.999 fused_ordering(286) 00:11:28.999 fused_ordering(287) 00:11:28.999 fused_ordering(288) 00:11:28.999 fused_ordering(289) 00:11:28.999 fused_ordering(290) 00:11:28.999 fused_ordering(291) 00:11:28.999 fused_ordering(292) 00:11:28.999 fused_ordering(293) 00:11:28.999 fused_ordering(294) 
00:11:28.999 fused_ordering(295) 00:11:28.999 fused_ordering(296) 00:11:28.999 fused_ordering(297) 00:11:28.999 fused_ordering(298) 00:11:28.999 fused_ordering(299) 00:11:28.999 fused_ordering(300) 00:11:28.999 fused_ordering(301) 00:11:28.999 fused_ordering(302) 00:11:28.999 fused_ordering(303) 00:11:28.999 fused_ordering(304) 00:11:28.999 fused_ordering(305) 00:11:28.999 fused_ordering(306) 00:11:28.999 fused_ordering(307) 00:11:28.999 fused_ordering(308) 00:11:28.999 fused_ordering(309) 00:11:28.999 fused_ordering(310) 00:11:28.999 fused_ordering(311) 00:11:28.999 fused_ordering(312) 00:11:28.999 fused_ordering(313) 00:11:28.999 fused_ordering(314) 00:11:28.999 fused_ordering(315) 00:11:28.999 fused_ordering(316) 00:11:28.999 fused_ordering(317) 00:11:28.999 fused_ordering(318) 00:11:28.999 fused_ordering(319) 00:11:28.999 fused_ordering(320) 00:11:28.999 fused_ordering(321) 00:11:28.999 fused_ordering(322) 00:11:28.999 fused_ordering(323) 00:11:28.999 fused_ordering(324) 00:11:28.999 fused_ordering(325) 00:11:28.999 fused_ordering(326) 00:11:28.999 fused_ordering(327) 00:11:28.999 fused_ordering(328) 00:11:28.999 fused_ordering(329) 00:11:28.999 fused_ordering(330) 00:11:28.999 fused_ordering(331) 00:11:28.999 fused_ordering(332) 00:11:28.999 fused_ordering(333) 00:11:28.999 fused_ordering(334) 00:11:28.999 fused_ordering(335) 00:11:28.999 fused_ordering(336) 00:11:28.999 fused_ordering(337) 00:11:28.999 fused_ordering(338) 00:11:28.999 fused_ordering(339) 00:11:28.999 fused_ordering(340) 00:11:28.999 fused_ordering(341) 00:11:28.999 fused_ordering(342) 00:11:28.999 fused_ordering(343) 00:11:28.999 fused_ordering(344) 00:11:28.999 fused_ordering(345) 00:11:28.999 fused_ordering(346) 00:11:28.999 fused_ordering(347) 00:11:28.999 fused_ordering(348) 00:11:28.999 fused_ordering(349) 00:11:28.999 fused_ordering(350) 00:11:28.999 fused_ordering(351) 00:11:28.999 fused_ordering(352) 00:11:28.999 fused_ordering(353) 00:11:28.999 fused_ordering(354) 00:11:28.999 fused_ordering(355) 00:11:28.999 fused_ordering(356) 00:11:28.999 fused_ordering(357) 00:11:28.999 fused_ordering(358) 00:11:28.999 fused_ordering(359) 00:11:28.999 fused_ordering(360) 00:11:28.999 fused_ordering(361) 00:11:28.999 fused_ordering(362) 00:11:28.999 fused_ordering(363) 00:11:28.999 fused_ordering(364) 00:11:28.999 fused_ordering(365) 00:11:28.999 fused_ordering(366) 00:11:28.999 fused_ordering(367) 00:11:28.999 fused_ordering(368) 00:11:28.999 fused_ordering(369) 00:11:28.999 fused_ordering(370) 00:11:28.999 fused_ordering(371) 00:11:28.999 fused_ordering(372) 00:11:28.999 fused_ordering(373) 00:11:28.999 fused_ordering(374) 00:11:28.999 fused_ordering(375) 00:11:28.999 fused_ordering(376) 00:11:28.999 fused_ordering(377) 00:11:28.999 fused_ordering(378) 00:11:28.999 fused_ordering(379) 00:11:28.999 fused_ordering(380) 00:11:28.999 fused_ordering(381) 00:11:28.999 fused_ordering(382) 00:11:28.999 fused_ordering(383) 00:11:28.999 fused_ordering(384) 00:11:28.999 fused_ordering(385) 00:11:28.999 fused_ordering(386) 00:11:28.999 fused_ordering(387) 00:11:28.999 fused_ordering(388) 00:11:28.999 fused_ordering(389) 00:11:28.999 fused_ordering(390) 00:11:28.999 fused_ordering(391) 00:11:28.999 fused_ordering(392) 00:11:28.999 fused_ordering(393) 00:11:28.999 fused_ordering(394) 00:11:28.999 fused_ordering(395) 00:11:28.999 fused_ordering(396) 00:11:28.999 fused_ordering(397) 00:11:28.999 fused_ordering(398) 00:11:28.999 fused_ordering(399) 00:11:28.999 fused_ordering(400) 00:11:28.999 fused_ordering(401) 00:11:28.999 
fused_ordering(402) 00:11:28.999 fused_ordering(403) 00:11:28.999 fused_ordering(404) 00:11:28.999 fused_ordering(405) 00:11:28.999 fused_ordering(406) 00:11:28.999 fused_ordering(407) 00:11:28.999 fused_ordering(408) 00:11:28.999 fused_ordering(409) 00:11:28.999 fused_ordering(410) 00:11:29.259 fused_ordering(411) 00:11:29.259 fused_ordering(412) 00:11:29.259 fused_ordering(413) 00:11:29.259 fused_ordering(414) 00:11:29.259 fused_ordering(415) 00:11:29.259 fused_ordering(416) 00:11:29.259 fused_ordering(417) 00:11:29.259 fused_ordering(418) 00:11:29.259 fused_ordering(419) 00:11:29.259 fused_ordering(420) 00:11:29.259 fused_ordering(421) 00:11:29.259 fused_ordering(422) 00:11:29.259 fused_ordering(423) 00:11:29.259 fused_ordering(424) 00:11:29.259 fused_ordering(425) 00:11:29.259 fused_ordering(426) 00:11:29.259 fused_ordering(427) 00:11:29.259 fused_ordering(428) 00:11:29.259 fused_ordering(429) 00:11:29.259 fused_ordering(430) 00:11:29.259 fused_ordering(431) 00:11:29.259 fused_ordering(432) 00:11:29.259 fused_ordering(433) 00:11:29.259 fused_ordering(434) 00:11:29.259 fused_ordering(435) 00:11:29.259 fused_ordering(436) 00:11:29.259 fused_ordering(437) 00:11:29.259 fused_ordering(438) 00:11:29.259 fused_ordering(439) 00:11:29.259 fused_ordering(440) 00:11:29.259 fused_ordering(441) 00:11:29.259 fused_ordering(442) 00:11:29.259 fused_ordering(443) 00:11:29.259 fused_ordering(444) 00:11:29.259 fused_ordering(445) 00:11:29.259 fused_ordering(446) 00:11:29.259 fused_ordering(447) 00:11:29.259 fused_ordering(448) 00:11:29.259 fused_ordering(449) 00:11:29.259 fused_ordering(450) 00:11:29.259 fused_ordering(451) 00:11:29.259 fused_ordering(452) 00:11:29.259 fused_ordering(453) 00:11:29.259 fused_ordering(454) 00:11:29.259 fused_ordering(455) 00:11:29.259 fused_ordering(456) 00:11:29.259 fused_ordering(457) 00:11:29.259 fused_ordering(458) 00:11:29.259 fused_ordering(459) 00:11:29.259 fused_ordering(460) 00:11:29.259 fused_ordering(461) 00:11:29.259 fused_ordering(462) 00:11:29.260 fused_ordering(463) 00:11:29.260 fused_ordering(464) 00:11:29.260 fused_ordering(465) 00:11:29.260 fused_ordering(466) 00:11:29.260 fused_ordering(467) 00:11:29.260 fused_ordering(468) 00:11:29.260 fused_ordering(469) 00:11:29.260 fused_ordering(470) 00:11:29.260 fused_ordering(471) 00:11:29.260 fused_ordering(472) 00:11:29.260 fused_ordering(473) 00:11:29.260 fused_ordering(474) 00:11:29.260 fused_ordering(475) 00:11:29.260 fused_ordering(476) 00:11:29.260 fused_ordering(477) 00:11:29.260 fused_ordering(478) 00:11:29.260 fused_ordering(479) 00:11:29.260 fused_ordering(480) 00:11:29.260 fused_ordering(481) 00:11:29.260 fused_ordering(482) 00:11:29.260 fused_ordering(483) 00:11:29.260 fused_ordering(484) 00:11:29.260 fused_ordering(485) 00:11:29.260 fused_ordering(486) 00:11:29.260 fused_ordering(487) 00:11:29.260 fused_ordering(488) 00:11:29.260 fused_ordering(489) 00:11:29.260 fused_ordering(490) 00:11:29.260 fused_ordering(491) 00:11:29.260 fused_ordering(492) 00:11:29.260 fused_ordering(493) 00:11:29.260 fused_ordering(494) 00:11:29.260 fused_ordering(495) 00:11:29.260 fused_ordering(496) 00:11:29.260 fused_ordering(497) 00:11:29.260 fused_ordering(498) 00:11:29.260 fused_ordering(499) 00:11:29.260 fused_ordering(500) 00:11:29.260 fused_ordering(501) 00:11:29.260 fused_ordering(502) 00:11:29.260 fused_ordering(503) 00:11:29.260 fused_ordering(504) 00:11:29.260 fused_ordering(505) 00:11:29.260 fused_ordering(506) 00:11:29.260 fused_ordering(507) 00:11:29.260 fused_ordering(508) 00:11:29.260 fused_ordering(509) 
00:11:29.260 fused_ordering(510) 00:11:29.260 fused_ordering(511) 00:11:29.260 fused_ordering(512) 00:11:29.260 fused_ordering(513) 00:11:29.260 fused_ordering(514) 00:11:29.260 fused_ordering(515) 00:11:29.260 fused_ordering(516) 00:11:29.260 fused_ordering(517) 00:11:29.260 fused_ordering(518) 00:11:29.260 fused_ordering(519) 00:11:29.260 fused_ordering(520) 00:11:29.260 fused_ordering(521) 00:11:29.260 fused_ordering(522) 00:11:29.260 fused_ordering(523) 00:11:29.260 fused_ordering(524) 00:11:29.260 fused_ordering(525) 00:11:29.260 fused_ordering(526) 00:11:29.260 fused_ordering(527) 00:11:29.260 fused_ordering(528) 00:11:29.260 fused_ordering(529) 00:11:29.260 fused_ordering(530) 00:11:29.260 fused_ordering(531) 00:11:29.260 fused_ordering(532) 00:11:29.260 fused_ordering(533) 00:11:29.260 fused_ordering(534) 00:11:29.260 fused_ordering(535) 00:11:29.260 fused_ordering(536) 00:11:29.260 fused_ordering(537) 00:11:29.260 fused_ordering(538) 00:11:29.260 fused_ordering(539) 00:11:29.260 fused_ordering(540) 00:11:29.260 fused_ordering(541) 00:11:29.260 fused_ordering(542) 00:11:29.260 fused_ordering(543) 00:11:29.260 fused_ordering(544) 00:11:29.260 fused_ordering(545) 00:11:29.260 fused_ordering(546) 00:11:29.260 fused_ordering(547) 00:11:29.260 fused_ordering(548) 00:11:29.260 fused_ordering(549) 00:11:29.260 fused_ordering(550) 00:11:29.260 fused_ordering(551) 00:11:29.260 fused_ordering(552) 00:11:29.260 fused_ordering(553) 00:11:29.260 fused_ordering(554) 00:11:29.260 fused_ordering(555) 00:11:29.260 fused_ordering(556) 00:11:29.260 fused_ordering(557) 00:11:29.260 fused_ordering(558) 00:11:29.260 fused_ordering(559) 00:11:29.260 fused_ordering(560) 00:11:29.260 fused_ordering(561) 00:11:29.260 fused_ordering(562) 00:11:29.260 fused_ordering(563) 00:11:29.260 fused_ordering(564) 00:11:29.260 fused_ordering(565) 00:11:29.260 fused_ordering(566) 00:11:29.260 fused_ordering(567) 00:11:29.260 fused_ordering(568) 00:11:29.260 fused_ordering(569) 00:11:29.260 fused_ordering(570) 00:11:29.260 fused_ordering(571) 00:11:29.260 fused_ordering(572) 00:11:29.260 fused_ordering(573) 00:11:29.260 fused_ordering(574) 00:11:29.260 fused_ordering(575) 00:11:29.260 fused_ordering(576) 00:11:29.260 fused_ordering(577) 00:11:29.260 fused_ordering(578) 00:11:29.260 fused_ordering(579) 00:11:29.260 fused_ordering(580) 00:11:29.260 fused_ordering(581) 00:11:29.260 fused_ordering(582) 00:11:29.260 fused_ordering(583) 00:11:29.260 fused_ordering(584) 00:11:29.260 fused_ordering(585) 00:11:29.260 fused_ordering(586) 00:11:29.260 fused_ordering(587) 00:11:29.260 fused_ordering(588) 00:11:29.260 fused_ordering(589) 00:11:29.260 fused_ordering(590) 00:11:29.260 fused_ordering(591) 00:11:29.260 fused_ordering(592) 00:11:29.260 fused_ordering(593) 00:11:29.260 fused_ordering(594) 00:11:29.260 fused_ordering(595) 00:11:29.260 fused_ordering(596) 00:11:29.260 fused_ordering(597) 00:11:29.260 fused_ordering(598) 00:11:29.260 fused_ordering(599) 00:11:29.260 fused_ordering(600) 00:11:29.260 fused_ordering(601) 00:11:29.260 fused_ordering(602) 00:11:29.260 fused_ordering(603) 00:11:29.260 fused_ordering(604) 00:11:29.260 fused_ordering(605) 00:11:29.260 fused_ordering(606) 00:11:29.260 fused_ordering(607) 00:11:29.260 fused_ordering(608) 00:11:29.260 fused_ordering(609) 00:11:29.260 fused_ordering(610) 00:11:29.260 fused_ordering(611) 00:11:29.260 fused_ordering(612) 00:11:29.260 fused_ordering(613) 00:11:29.260 fused_ordering(614) 00:11:29.260 fused_ordering(615) 00:11:29.838 fused_ordering(616) 00:11:29.838 
fused_ordering(617) 00:11:29.838 fused_ordering(618) 00:11:29.838 fused_ordering(619) 00:11:29.838 fused_ordering(620) 00:11:29.838 fused_ordering(621) 00:11:29.838 fused_ordering(622) 00:11:29.838 fused_ordering(623) 00:11:29.838 fused_ordering(624) 00:11:29.838 fused_ordering(625) 00:11:29.838 fused_ordering(626) 00:11:29.838 fused_ordering(627) 00:11:29.838 fused_ordering(628) 00:11:29.838 fused_ordering(629) 00:11:29.838 fused_ordering(630) 00:11:29.838 fused_ordering(631) 00:11:29.838 fused_ordering(632) 00:11:29.838 fused_ordering(633) 00:11:29.838 fused_ordering(634) 00:11:29.838 fused_ordering(635) 00:11:29.838 fused_ordering(636) 00:11:29.838 fused_ordering(637) 00:11:29.838 fused_ordering(638) 00:11:29.838 fused_ordering(639) 00:11:29.838 fused_ordering(640) 00:11:29.838 fused_ordering(641) 00:11:29.838 fused_ordering(642) 00:11:29.838 fused_ordering(643) 00:11:29.838 fused_ordering(644) 00:11:29.838 fused_ordering(645) 00:11:29.838 fused_ordering(646) 00:11:29.838 fused_ordering(647) 00:11:29.838 fused_ordering(648) 00:11:29.838 fused_ordering(649) 00:11:29.838 fused_ordering(650) 00:11:29.838 fused_ordering(651) 00:11:29.838 fused_ordering(652) 00:11:29.838 fused_ordering(653) 00:11:29.838 fused_ordering(654) 00:11:29.838 fused_ordering(655) 00:11:29.838 fused_ordering(656) 00:11:29.838 fused_ordering(657) 00:11:29.838 fused_ordering(658) 00:11:29.838 fused_ordering(659) 00:11:29.838 fused_ordering(660) 00:11:29.838 fused_ordering(661) 00:11:29.838 fused_ordering(662) 00:11:29.838 fused_ordering(663) 00:11:29.838 fused_ordering(664) 00:11:29.838 fused_ordering(665) 00:11:29.838 fused_ordering(666) 00:11:29.838 fused_ordering(667) 00:11:29.838 fused_ordering(668) 00:11:29.838 fused_ordering(669) 00:11:29.838 fused_ordering(670) 00:11:29.838 fused_ordering(671) 00:11:29.838 fused_ordering(672) 00:11:29.838 fused_ordering(673) 00:11:29.838 fused_ordering(674) 00:11:29.838 fused_ordering(675) 00:11:29.838 fused_ordering(676) 00:11:29.838 fused_ordering(677) 00:11:29.838 fused_ordering(678) 00:11:29.838 fused_ordering(679) 00:11:29.838 fused_ordering(680) 00:11:29.838 fused_ordering(681) 00:11:29.838 fused_ordering(682) 00:11:29.838 fused_ordering(683) 00:11:29.838 fused_ordering(684) 00:11:29.838 fused_ordering(685) 00:11:29.838 fused_ordering(686) 00:11:29.838 fused_ordering(687) 00:11:29.838 fused_ordering(688) 00:11:29.838 fused_ordering(689) 00:11:29.838 fused_ordering(690) 00:11:29.838 fused_ordering(691) 00:11:29.838 fused_ordering(692) 00:11:29.838 fused_ordering(693) 00:11:29.838 fused_ordering(694) 00:11:29.838 fused_ordering(695) 00:11:29.838 fused_ordering(696) 00:11:29.838 fused_ordering(697) 00:11:29.838 fused_ordering(698) 00:11:29.838 fused_ordering(699) 00:11:29.838 fused_ordering(700) 00:11:29.838 fused_ordering(701) 00:11:29.838 fused_ordering(702) 00:11:29.838 fused_ordering(703) 00:11:29.838 fused_ordering(704) 00:11:29.838 fused_ordering(705) 00:11:29.838 fused_ordering(706) 00:11:29.838 fused_ordering(707) 00:11:29.838 fused_ordering(708) 00:11:29.838 fused_ordering(709) 00:11:29.838 fused_ordering(710) 00:11:29.838 fused_ordering(711) 00:11:29.838 fused_ordering(712) 00:11:29.838 fused_ordering(713) 00:11:29.838 fused_ordering(714) 00:11:29.838 fused_ordering(715) 00:11:29.838 fused_ordering(716) 00:11:29.838 fused_ordering(717) 00:11:29.838 fused_ordering(718) 00:11:29.838 fused_ordering(719) 00:11:29.838 fused_ordering(720) 00:11:29.838 fused_ordering(721) 00:11:29.838 fused_ordering(722) 00:11:29.838 fused_ordering(723) 00:11:29.838 fused_ordering(724) 
00:11:29.839 fused_ordering(725) 00:11:29.839 fused_ordering(726) 00:11:29.839 fused_ordering(727) 00:11:29.839 fused_ordering(728) 00:11:29.839 fused_ordering(729) 00:11:29.839 fused_ordering(730) 00:11:29.839 fused_ordering(731) 00:11:29.839 fused_ordering(732) 00:11:29.839 fused_ordering(733) 00:11:29.839 fused_ordering(734) 00:11:29.839 fused_ordering(735) 00:11:29.839 fused_ordering(736) 00:11:29.839 fused_ordering(737) 00:11:29.839 fused_ordering(738) 00:11:29.839 fused_ordering(739) 00:11:29.839 fused_ordering(740) 00:11:29.839 fused_ordering(741) 00:11:29.839 fused_ordering(742) 00:11:29.839 fused_ordering(743) 00:11:29.839 fused_ordering(744) 00:11:29.839 fused_ordering(745) 00:11:29.839 fused_ordering(746) 00:11:29.839 fused_ordering(747) 00:11:29.839 fused_ordering(748) 00:11:29.839 fused_ordering(749) 00:11:29.839 fused_ordering(750) 00:11:29.839 fused_ordering(751) 00:11:29.839 fused_ordering(752) 00:11:29.839 fused_ordering(753) 00:11:29.839 fused_ordering(754) 00:11:29.839 fused_ordering(755) 00:11:29.839 fused_ordering(756) 00:11:29.839 fused_ordering(757) 00:11:29.839 fused_ordering(758) 00:11:29.839 fused_ordering(759) 00:11:29.839 fused_ordering(760) 00:11:29.839 fused_ordering(761) 00:11:29.839 fused_ordering(762) 00:11:29.839 fused_ordering(763) 00:11:29.839 fused_ordering(764) 00:11:29.839 fused_ordering(765) 00:11:29.839 fused_ordering(766) 00:11:29.839 fused_ordering(767) 00:11:29.839 fused_ordering(768) 00:11:29.839 fused_ordering(769) 00:11:29.839 fused_ordering(770) 00:11:29.839 fused_ordering(771) 00:11:29.839 fused_ordering(772) 00:11:29.839 fused_ordering(773) 00:11:29.839 fused_ordering(774) 00:11:29.839 fused_ordering(775) 00:11:29.839 fused_ordering(776) 00:11:29.839 fused_ordering(777) 00:11:29.839 fused_ordering(778) 00:11:29.839 fused_ordering(779) 00:11:29.839 fused_ordering(780) 00:11:29.839 fused_ordering(781) 00:11:29.839 fused_ordering(782) 00:11:29.839 fused_ordering(783) 00:11:29.839 fused_ordering(784) 00:11:29.839 fused_ordering(785) 00:11:29.839 fused_ordering(786) 00:11:29.839 fused_ordering(787) 00:11:29.839 fused_ordering(788) 00:11:29.839 fused_ordering(789) 00:11:29.839 fused_ordering(790) 00:11:29.839 fused_ordering(791) 00:11:29.839 fused_ordering(792) 00:11:29.839 fused_ordering(793) 00:11:29.839 fused_ordering(794) 00:11:29.839 fused_ordering(795) 00:11:29.839 fused_ordering(796) 00:11:29.839 fused_ordering(797) 00:11:29.839 fused_ordering(798) 00:11:29.839 fused_ordering(799) 00:11:29.839 fused_ordering(800) 00:11:29.839 fused_ordering(801) 00:11:29.839 fused_ordering(802) 00:11:29.839 fused_ordering(803) 00:11:29.839 fused_ordering(804) 00:11:29.839 fused_ordering(805) 00:11:29.839 fused_ordering(806) 00:11:29.839 fused_ordering(807) 00:11:29.839 fused_ordering(808) 00:11:29.839 fused_ordering(809) 00:11:29.839 fused_ordering(810) 00:11:29.839 fused_ordering(811) 00:11:29.839 fused_ordering(812) 00:11:29.839 fused_ordering(813) 00:11:29.839 fused_ordering(814) 00:11:29.839 fused_ordering(815) 00:11:29.839 fused_ordering(816) 00:11:29.839 fused_ordering(817) 00:11:29.839 fused_ordering(818) 00:11:29.839 fused_ordering(819) 00:11:29.839 fused_ordering(820) 00:11:30.414 fused_ordering(821) 00:11:30.414 fused_ordering(822) 00:11:30.414 fused_ordering(823) 00:11:30.414 fused_ordering(824) 00:11:30.414 fused_ordering(825) 00:11:30.414 fused_ordering(826) 00:11:30.414 fused_ordering(827) 00:11:30.414 fused_ordering(828) 00:11:30.414 fused_ordering(829) 00:11:30.414 fused_ordering(830) 00:11:30.414 fused_ordering(831) 00:11:30.414 
fused_ordering(832) 00:11:30.414 fused_ordering(833) 00:11:30.414 fused_ordering(834) 00:11:30.414 fused_ordering(835) 00:11:30.414 fused_ordering(836) 00:11:30.414 fused_ordering(837) 00:11:30.414 fused_ordering(838) 00:11:30.414 fused_ordering(839) 00:11:30.414 fused_ordering(840) 00:11:30.414 fused_ordering(841) 00:11:30.414 fused_ordering(842) 00:11:30.414 fused_ordering(843) 00:11:30.414 fused_ordering(844) 00:11:30.414 fused_ordering(845) 00:11:30.414 fused_ordering(846) 00:11:30.414 fused_ordering(847) 00:11:30.414 fused_ordering(848) 00:11:30.414 fused_ordering(849) 00:11:30.414 fused_ordering(850) 00:11:30.414 fused_ordering(851) 00:11:30.414 fused_ordering(852) 00:11:30.414 fused_ordering(853) 00:11:30.414 fused_ordering(854) 00:11:30.414 fused_ordering(855) 00:11:30.414 fused_ordering(856) 00:11:30.414 fused_ordering(857) 00:11:30.414 fused_ordering(858) 00:11:30.414 fused_ordering(859) 00:11:30.414 fused_ordering(860) 00:11:30.414 fused_ordering(861) 00:11:30.414 fused_ordering(862) 00:11:30.414 fused_ordering(863) 00:11:30.414 fused_ordering(864) 00:11:30.414 fused_ordering(865) 00:11:30.414 fused_ordering(866) 00:11:30.414 fused_ordering(867) 00:11:30.414 fused_ordering(868) 00:11:30.414 fused_ordering(869) 00:11:30.414 fused_ordering(870) 00:11:30.414 fused_ordering(871) 00:11:30.414 fused_ordering(872) 00:11:30.414 fused_ordering(873) 00:11:30.414 fused_ordering(874) 00:11:30.414 fused_ordering(875) 00:11:30.414 fused_ordering(876) 00:11:30.414 fused_ordering(877) 00:11:30.414 fused_ordering(878) 00:11:30.414 fused_ordering(879) 00:11:30.414 fused_ordering(880) 00:11:30.414 fused_ordering(881) 00:11:30.414 fused_ordering(882) 00:11:30.414 fused_ordering(883) 00:11:30.414 fused_ordering(884) 00:11:30.414 fused_ordering(885) 00:11:30.414 fused_ordering(886) 00:11:30.414 fused_ordering(887) 00:11:30.414 fused_ordering(888) 00:11:30.414 fused_ordering(889) 00:11:30.414 fused_ordering(890) 00:11:30.414 fused_ordering(891) 00:11:30.414 fused_ordering(892) 00:11:30.414 fused_ordering(893) 00:11:30.414 fused_ordering(894) 00:11:30.414 fused_ordering(895) 00:11:30.414 fused_ordering(896) 00:11:30.414 fused_ordering(897) 00:11:30.414 fused_ordering(898) 00:11:30.414 fused_ordering(899) 00:11:30.414 fused_ordering(900) 00:11:30.414 fused_ordering(901) 00:11:30.414 fused_ordering(902) 00:11:30.414 fused_ordering(903) 00:11:30.414 fused_ordering(904) 00:11:30.414 fused_ordering(905) 00:11:30.414 fused_ordering(906) 00:11:30.414 fused_ordering(907) 00:11:30.414 fused_ordering(908) 00:11:30.414 fused_ordering(909) 00:11:30.414 fused_ordering(910) 00:11:30.414 fused_ordering(911) 00:11:30.414 fused_ordering(912) 00:11:30.414 fused_ordering(913) 00:11:30.414 fused_ordering(914) 00:11:30.414 fused_ordering(915) 00:11:30.414 fused_ordering(916) 00:11:30.414 fused_ordering(917) 00:11:30.414 fused_ordering(918) 00:11:30.414 fused_ordering(919) 00:11:30.414 fused_ordering(920) 00:11:30.414 fused_ordering(921) 00:11:30.414 fused_ordering(922) 00:11:30.414 fused_ordering(923) 00:11:30.414 fused_ordering(924) 00:11:30.414 fused_ordering(925) 00:11:30.414 fused_ordering(926) 00:11:30.414 fused_ordering(927) 00:11:30.414 fused_ordering(928) 00:11:30.414 fused_ordering(929) 00:11:30.414 fused_ordering(930) 00:11:30.414 fused_ordering(931) 00:11:30.414 fused_ordering(932) 00:11:30.414 fused_ordering(933) 00:11:30.414 fused_ordering(934) 00:11:30.414 fused_ordering(935) 00:11:30.414 fused_ordering(936) 00:11:30.414 fused_ordering(937) 00:11:30.414 fused_ordering(938) 00:11:30.414 fused_ordering(939) 
00:11:30.414 fused_ordering(940) 00:11:30.414 fused_ordering(941) 00:11:30.414 fused_ordering(942) 00:11:30.414 fused_ordering(943) 00:11:30.414 fused_ordering(944) 00:11:30.414 fused_ordering(945) 00:11:30.414 fused_ordering(946) 00:11:30.414 fused_ordering(947) 00:11:30.414 fused_ordering(948) 00:11:30.414 fused_ordering(949) 00:11:30.414 fused_ordering(950) 00:11:30.414 fused_ordering(951) 00:11:30.414 fused_ordering(952) 00:11:30.414 fused_ordering(953) 00:11:30.415 fused_ordering(954) 00:11:30.415 fused_ordering(955) 00:11:30.415 fused_ordering(956) 00:11:30.415 fused_ordering(957) 00:11:30.415 fused_ordering(958) 00:11:30.415 fused_ordering(959) 00:11:30.415 fused_ordering(960) 00:11:30.415 fused_ordering(961) 00:11:30.415 fused_ordering(962) 00:11:30.415 fused_ordering(963) 00:11:30.415 fused_ordering(964) 00:11:30.415 fused_ordering(965) 00:11:30.415 fused_ordering(966) 00:11:30.415 fused_ordering(967) 00:11:30.415 fused_ordering(968) 00:11:30.415 fused_ordering(969) 00:11:30.415 fused_ordering(970) 00:11:30.415 fused_ordering(971) 00:11:30.415 fused_ordering(972) 00:11:30.415 fused_ordering(973) 00:11:30.415 fused_ordering(974) 00:11:30.415 fused_ordering(975) 00:11:30.415 fused_ordering(976) 00:11:30.415 fused_ordering(977) 00:11:30.415 fused_ordering(978) 00:11:30.415 fused_ordering(979) 00:11:30.415 fused_ordering(980) 00:11:30.415 fused_ordering(981) 00:11:30.415 fused_ordering(982) 00:11:30.415 fused_ordering(983) 00:11:30.415 fused_ordering(984) 00:11:30.415 fused_ordering(985) 00:11:30.415 fused_ordering(986) 00:11:30.415 fused_ordering(987) 00:11:30.415 fused_ordering(988) 00:11:30.415 fused_ordering(989) 00:11:30.415 fused_ordering(990) 00:11:30.415 fused_ordering(991) 00:11:30.415 fused_ordering(992) 00:11:30.415 fused_ordering(993) 00:11:30.415 fused_ordering(994) 00:11:30.415 fused_ordering(995) 00:11:30.415 fused_ordering(996) 00:11:30.415 fused_ordering(997) 00:11:30.415 fused_ordering(998) 00:11:30.415 fused_ordering(999) 00:11:30.415 fused_ordering(1000) 00:11:30.415 fused_ordering(1001) 00:11:30.415 fused_ordering(1002) 00:11:30.415 fused_ordering(1003) 00:11:30.415 fused_ordering(1004) 00:11:30.415 fused_ordering(1005) 00:11:30.415 fused_ordering(1006) 00:11:30.415 fused_ordering(1007) 00:11:30.415 fused_ordering(1008) 00:11:30.415 fused_ordering(1009) 00:11:30.415 fused_ordering(1010) 00:11:30.415 fused_ordering(1011) 00:11:30.415 fused_ordering(1012) 00:11:30.415 fused_ordering(1013) 00:11:30.415 fused_ordering(1014) 00:11:30.415 fused_ordering(1015) 00:11:30.415 fused_ordering(1016) 00:11:30.415 fused_ordering(1017) 00:11:30.415 fused_ordering(1018) 00:11:30.415 fused_ordering(1019) 00:11:30.415 fused_ordering(1020) 00:11:30.415 fused_ordering(1021) 00:11:30.415 fused_ordering(1022) 00:11:30.415 fused_ordering(1023) 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.415 00:22:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:30.415 rmmod nvme_tcp 00:11:30.415 rmmod nvme_fabrics 00:11:30.415 rmmod nvme_keyring 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 969731 ']' 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 969731 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 969731 ']' 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 969731 00:11:30.415 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 969731 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 969731' 00:11:30.676 killing process with pid 969731 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 969731 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 969731 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.676 00:22:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.224 00:22:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:33.224 00:11:33.224 real 0m14.037s 00:11:33.224 user 0m7.230s 00:11:33.224 sys 0m7.603s 00:11:33.224 00:22:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.224 00:22:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:33.224 ************************************ 00:11:33.224 END TEST nvmf_fused_ordering 00:11:33.224 ************************************ 00:11:33.224 00:22:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.224 00:22:46 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:33.224 00:22:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.224 00:22:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.224 
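For reference, the long fused_ordering counter dump above was produced by the configuration traced at fused_ordering.sh@15-22. Issued directly through scripts/rpc.py against the target launched earlier, the same setup would look roughly like this (a sketch of the traced calls, not the test script itself; paths assume the SPDK repo root):

    # TCP transport with an 8 KiB IO unit size, then the test subsystem and its listener.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Back the subsystem with a 1000 MiB null bdev (512-byte blocks); it shows up
    # above as "Namespace ID: 1 size: 1GB".
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Run the fused-ordering test tool against that listener; it prints the
    # fused_ordering(0..1023) counters seen in the output above.
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'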
00:22:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.224 ************************************ 00:11:33.224 START TEST nvmf_delete_subsystem 00:11:33.224 ************************************ 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:33.224 * Looking for test storage... 00:11:33.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.224 00:22:46 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.224 00:22:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.368 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:41.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:41.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.369 
00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:41.369 Found net devices under 0000:31:00.0: cvl_0_0 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:41.369 Found net devices under 0000:31:00.1: cvl_0_1 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.369 00:22:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:11:41.369 00:11:41.369 --- 10.0.0.2 ping statistics --- 00:11:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.369 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:11:41.369 00:11:41.369 --- 10.0.0.1 ping statistics --- 00:11:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.369 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=975073 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 975073 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 975073 ']' 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.369 00:22:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.369 [2024-07-16 00:22:54.620850] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:11:41.369 [2024-07-16 00:22:54.620912] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.369 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.369 [2024-07-16 00:22:54.701595] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.369 [2024-07-16 00:22:54.775936] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.369 [2024-07-16 00:22:54.775978] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.369 [2024-07-16 00:22:54.775986] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.369 [2024-07-16 00:22:54.775993] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.369 [2024-07-16 00:22:54.775998] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
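(Aside: the nvmf_tcp_init / nvmfappstart steps traced above amount to a namespace-based TCP loopback plus a target launch. A rough hand-run sketch follows; it is not part of the log. Interface names cvl_0_0/cvl_0_1, the namespace name and SPDK_DIR are assumptions taken from this particular run, and the final polling line only approximates what the script's waitforlisten helper does.)

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target-side port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # sanity-check both directions
# start the target inside the namespace, then wait for its RPC socket to answer
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done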
00:11:41.369 [2024-07-16 00:22:54.776138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.369 [2024-07-16 00:22:54.776140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 [2024-07-16 00:22:55.440072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 [2024-07-16 00:22:55.456227] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 NULL1 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 Delay0 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=975395 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:41.938 00:22:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:41.938 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.938 [2024-07-16 00:22:55.540945] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:44.483 00:22:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.483 00:22:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.483 00:22:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read 
completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 starting I/O failed: -6 00:11:44.483 [2024-07-16 00:22:57.707504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2310 is same with the state(5) to be set 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Write completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 
00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.483 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 starting I/O failed: -6 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 [2024-07-16 00:22:57.710836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b9c000c00 is same with the state(5) to be set 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write 
completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Write completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:44.484 Read completed with error (sct=0, sc=8) 00:11:45.157 [2024-07-16 00:22:58.682471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b16e0 is same with the state(5) to be set 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, 
sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 [2024-07-16 00:22:58.710866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d26c0 is same with the state(5) to be set 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 [2024-07-16 00:22:58.711439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2ea0 is same with the state(5) to be set 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 [2024-07-16 00:22:58.712983] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b9c00d020 is same with the state(5) to be set 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Write completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.157 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Write completed with error (sct=0, sc=8) 00:11:45.158 Write completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Write completed with error (sct=0, sc=8) 00:11:45.158 Write completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Write completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 Read completed with error (sct=0, sc=8) 00:11:45.158 [2024-07-16 00:22:58.713123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b9c00d800 is same with the state(5) to be set 00:11:45.158 Initializing NVMe Controllers 00:11:45.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:45.158 Controller IO queue size 128, less than required. 00:11:45.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:45.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:45.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:45.158 Initialization complete. Launching workers. 
00:11:45.158 ======================================================== 00:11:45.158 Latency(us) 00:11:45.158 Device Information : IOPS MiB/s Average min max 00:11:45.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.68 0.08 895299.71 237.61 1007907.07 00:11:45.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.20 0.08 959147.43 294.32 2002879.21 00:11:45.158 ======================================================== 00:11:45.158 Total : 333.88 0.16 926700.23 237.61 2002879.21 00:11:45.158 00:11:45.158 [2024-07-16 00:22:58.713649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b16e0 (9): Bad file descriptor 00:11:45.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:45.158 00:22:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.158 00:22:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:45.158 00:22:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 975395 00:11:45.158 00:22:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 975395 00:11:45.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (975395) - No such process 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 975395 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 975395 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 975395 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.729 [2024-07-16 00:22:59.243430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=976078 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:45.729 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.729 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.729 [2024-07-16 00:22:59.312488] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
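(Aside: the pattern this test repeats, above and in the polling loop that follows, is: provision a subsystem backed by an artificially slow delay bdev, run spdk_nvme_perf against it, delete the subsystem while I/O is in flight, and poll until perf exits with an error. A condensed stand-alone sketch, not part of the log; RPC arguments are copied from the traces above, SPDK_DIR is an assumption, and the loop structure only approximates delete_subsystem.sh.)

RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies so I/O stays outstanding
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem while I/O is queued

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do               # poll until perf gives up
    (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; break; }
    sleep 0.5
done
! wait "$perf_pid"    # expected to fail: its outstanding I/O was aborted when the subsystem went away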
00:11:46.301 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.301 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:46.301 00:22:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.872 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.872 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:46.872 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.442 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.442 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:47.442 00:23:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.702 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.702 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:47.702 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.276 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.276 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:48.276 00:23:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.848 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.848 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:48.848 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.848 Initializing NVMe Controllers 00:11:48.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.848 Controller IO queue size 128, less than required. 00:11:48.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:48.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:48.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:48.848 Initialization complete. Launching workers. 
00:11:48.848 ======================================================== 00:11:48.848 Latency(us) 00:11:48.848 Device Information : IOPS MiB/s Average min max 00:11:48.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002074.23 1000253.18 1005820.39 00:11:48.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003031.60 1000257.62 1010130.56 00:11:48.848 ======================================================== 00:11:48.848 Total : 256.00 0.12 1002552.91 1000253.18 1010130.56 00:11:48.848 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 976078 00:11:49.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (976078) - No such process 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 976078 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.418 rmmod nvme_tcp 00:11:49.418 rmmod nvme_fabrics 00:11:49.418 rmmod nvme_keyring 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 975073 ']' 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 975073 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 975073 ']' 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 975073 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 975073 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 975073' 00:11:49.418 killing process with pid 975073 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 975073 00:11:49.418 00:23:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 975073 
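(Aside: the teardown that nvmftestfini performs here, continuing below with nvmf_tcp_fini, is essentially the setup in reverse. A rough sketch under stated assumptions; the namespace-deletion line is an assumption, since the _remove_spdk_ns helper's output is redirected away in this log.)

kill "$nvmfpid" && wait "$nvmfpid"        # assumption: $nvmfpid holds the nvmf_tgt pid (975073 in this run)
modprobe -v -r nvme-tcp                    # the -v output above shows nvme_tcp/nvme_fabrics/nvme_keyring being rmmod'ed
modprobe -v -r nvme-fabrics
ip netns del cvl_0_0_ns_spdk 2>/dev/null   # assumption: roughly what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1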
00:11:49.418 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.418 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.419 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.419 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.419 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.679 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.679 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.679 00:23:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.590 00:23:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.590 00:11:51.590 real 0m18.764s 00:11:51.590 user 0m30.837s 00:11:51.590 sys 0m6.943s 00:11:51.590 00:23:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.590 00:23:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.590 ************************************ 00:11:51.590 END TEST nvmf_delete_subsystem 00:11:51.590 ************************************ 00:11:51.590 00:23:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:51.590 00:23:05 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:51.590 00:23:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:51.590 00:23:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.590 00:23:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.590 ************************************ 00:11:51.590 START TEST nvmf_ns_masking 00:11:51.590 ************************************ 00:11:51.590 00:23:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:51.852 * Looking for test storage... 
00:11:51.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=468a3ff5-4561-4d5e-98ee-08d7dfd9e21d 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1879c507-0aef-4fd4-be7f-1a3443b1bfb7 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=61d66333-9a62-4ae7-a67b-9a8d826f6d6b 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.852 00:23:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:59.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:59.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.994 
00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:59.994 Found net devices under 0000:31:00.0: cvl_0_0 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:59.994 Found net devices under 0000:31:00.1: cvl_0_1 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:11:59.994 00:11:59.994 --- 10.0.0.2 ping statistics --- 00:11:59.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.994 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:11:59.994 00:11:59.994 --- 10.0.0.1 ping statistics --- 00:11:59.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.994 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=981461 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 981461 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 981461 ']' 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.994 00:23:13 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.994 00:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.254 [2024-07-16 00:23:13.642314] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:00.254 [2024-07-16 00:23:13.642380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.254 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.254 [2024-07-16 00:23:13.722369] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.254 [2024-07-16 00:23:13.795603] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.254 [2024-07-16 00:23:13.795644] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.254 [2024-07-16 00:23:13.795652] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.254 [2024-07-16 00:23:13.795658] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.254 [2024-07-16 00:23:13.795663] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.254 [2024-07-16 00:23:13.795682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.963 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.963 [2024-07-16 00:23:14.582717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.222 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:01.222 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:01.222 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.222 Malloc1 00:12:01.222 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:01.483 Malloc2 00:12:01.483 00:23:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
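
Condensed for reference, the target-side bring-up traced above boils down to four RPCs. This is a sketch only: "rpc.py" stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the trace, talking to the default /var/tmp/spdk.sock of the nvmf_tgt started inside cvl_0_0_ns_spdk.

    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags exactly as traced at ns_masking.sh@53
    rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a allows any host; -s is the serial the connect loop greps for
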
00:12:01.744 00:23:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.744 00:23:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.004 [2024-07-16 00:23:15.426324] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61d66333-9a62-4ae7-a67b-9a8d826f6d6b -a 10.0.0.2 -s 4420 -i 4 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.004 00:23:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.548 [ 0]:0x1 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55bf8d2dcf764c3792d4c66c1e09e267 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55bf8d2dcf764c3792d4c66c1e09e267 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.548 [ 0]:0x1 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55bf8d2dcf764c3792d4c66c1e09e267 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55bf8d2dcf764c3792d4c66c1e09e267 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.548 [ 1]:0x2 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:04.548 00:23:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.548 00:23:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.810 00:23:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:05.069 00:23:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:05.069 00:23:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61d66333-9a62-4ae7-a67b-9a8d826f6d6b -a 10.0.0.2 -s 4420 -i 4 00:12:05.069 00:23:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:05.069 00:23:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.070 00:23:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.070 00:23:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:05.070 00:23:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:05.070 00:23:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:06.983 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.244 00:23:20 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.244 [ 0]:0x2 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.244 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.505 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:07.505 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.505 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.505 [ 0]:0x1 00:12:07.505 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.505 00:23:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55bf8d2dcf764c3792d4c66c1e09e267 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55bf8d2dcf764c3792d4c66c1e09e267 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.505 [ 1]:0x2 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.505 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.766 [ 0]:0x2 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:07.766 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.027 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.027 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:08.027 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61d66333-9a62-4ae7-a67b-9a8d826f6d6b -a 10.0.0.2 -s 4420 -i 4 00:12:08.287 00:23:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.287 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.287 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.287 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:08.288 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:08.288 00:23:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
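
Stripped of the xtrace noise, the per-host masking exercised in this part of the test comes down to three RPCs (again a sketch, with rpc.py standing in for the full scripts/rpc.py path):

    # Attach the namespace without auto-visibility, then grant or revoke it per host NQN.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1    # NSID 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1    # hidden again: host sees an all-zero NGUID

Removing a host from an NSID it was never granted fails with -32602 Invalid parameters, which is what the NOT-wrapped nvmf_ns_remove_host call on NSID 2 further down is checking.
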
00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:10.201 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.461 [ 0]:0x1 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55bf8d2dcf764c3792d4c66c1e09e267 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55bf8d2dcf764c3792d4c66c1e09e267 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:10.461 [ 1]:0x2 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.461 00:23:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:10.720 [ 0]:0x2 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:10.720 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.980 [2024-07-16 00:23:24.371923] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:10.980 request: 00:12:10.980 { 00:12:10.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.980 "nsid": 2, 00:12:10.980 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.980 "method": "nvmf_ns_remove_host", 00:12:10.980 "req_id": 1 00:12:10.980 } 00:12:10.980 Got JSON-RPC error response 00:12:10.980 response: 00:12:10.980 { 00:12:10.980 "code": -32602, 00:12:10.980 "message": "Invalid parameters" 00:12:10.980 } 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:10.980 [ 0]:0x2 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f110fddccdd40ee8b8407925a9c234e 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9f110fddccdd40ee8b8407925a9c234e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=983938 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 983938 /var/tmp/host.sock 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 983938 ']' 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.980 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:10.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:10.981 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.981 00:23:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:11.239 [2024-07-16 00:23:24.613336] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
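
The spdk_tgt launched here with -r /var/tmp/host.sock -m 2 plays the NVMe-oF host for the rest of the test: the namespaces are re-added with explicit NGUIDs, each host NQN is granted exactly one of them, and a bdev_nvme controller is attached per host NQN to confirm that each controller only enumerates its own namespace. A condensed sketch of that host-side check, with rpc.py again standing in for the full scripts/rpc.py path:

    hostrpc() {                           # rough reading of the hostrpc helper expanded below (target/ns_masking.sh@48)
        rpc.py -s /var/tmp/host.sock "$@"
    }

    # One controller per host NQN; namespace masking decides what each one sees.
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # ends up exposing nvme0n1 only
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # ends up exposing nvme1n2 only

    hostrpc bdev_get_bdevs | jq -r '.[].name'               # expected: nvme0n1 and nvme1n2
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'    # expected: the UUID whose NGUID was assigned to NSID 1
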
00:12:11.239 [2024-07-16 00:23:24.613386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983938 ] 00:12:11.239 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.239 [2024-07-16 00:23:24.697684] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.239 [2024-07-16 00:23:24.761425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.814 00:23:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.814 00:23:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:11.814 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.075 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:12.075 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 468a3ff5-4561-4d5e-98ee-08d7dfd9e21d 00:12:12.075 00:23:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:12.075 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 468A3FF545614D5E98EE08D7DFD9E21D -i 00:12:12.334 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1879c507-0aef-4fd4-be7f-1a3443b1bfb7 00:12:12.334 00:23:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:12.334 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1879C5070AEF4FD4BE7F1A3443B1BFB7 -i 00:12:12.594 00:23:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:12.594 00:23:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:12.854 00:23:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:12.854 00:23:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:13.114 nvme0n1 00:12:13.114 00:23:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:13.114 00:23:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:13.686 nvme1n2 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:13.686 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 468a3ff5-4561-4d5e-98ee-08d7dfd9e21d == \4\6\8\a\3\f\f\5\-\4\5\6\1\-\4\d\5\e\-\9\8\e\e\-\0\8\d\7\d\f\d\9\e\2\1\d ]] 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1879c507-0aef-4fd4-be7f-1a3443b1bfb7 == \1\8\7\9\c\5\0\7\-\0\a\e\f\-\4\f\d\4\-\b\e\7\f\-\1\a\3\4\4\3\b\1\b\f\b\7 ]] 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 983938 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 983938 ']' 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 983938 00:12:13.947 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 983938 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 983938' 00:12:14.206 killing process with pid 983938 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 983938 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 983938 00:12:14.206 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:14.492 00:23:27 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.492 00:23:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.492 rmmod nvme_tcp 00:12:14.492 rmmod nvme_fabrics 00:12:14.492 rmmod nvme_keyring 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 981461 ']' 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 981461 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 981461 ']' 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 981461 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.492 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 981461 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 981461' 00:12:14.753 killing process with pid 981461 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 981461 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 981461 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.753 00:23:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.299 00:23:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.299 00:12:17.299 real 0m25.156s 00:12:17.299 user 0m24.575s 00:12:17.299 sys 0m7.917s 00:12:17.299 00:23:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.299 00:23:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 ************************************ 00:12:17.299 END TEST nvmf_ns_masking 00:12:17.299 ************************************ 00:12:17.299 00:23:30 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:17.299 00:23:30 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:17.299 00:23:30 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:17.299 00:23:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:17.299 00:23:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.299 00:23:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:17.299 ************************************ 00:12:17.299 START TEST nvmf_nvme_cli 00:12:17.299 ************************************ 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:17.299 * Looking for test storage... 00:12:17.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.299 00:23:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.443 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:25.444 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:25.444 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:25.444 Found net devices under 0000:31:00.0: cvl_0_0 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:25.444 Found net devices under 0000:31:00.1: cvl_0_1 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.444 00:23:38 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.823 ms 00:12:25.444 00:12:25.444 --- 10.0.0.2 ping statistics --- 00:12:25.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.444 rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:12:25.444 00:12:25.444 --- 10.0.0.1 ping statistics --- 00:12:25.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.444 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.444 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=989327 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 989327 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 989327 ']' 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.445 00:23:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.445 [2024-07-16 00:23:38.941431] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
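For reference, the nvmf_tcp_init sequence traced above can be reproduced by hand. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names, namespace name, and 10.0.0.0/24 addressing that this run uses (all of these are environment-specific):

  # Move the target-side port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring up both links plus the namespace loopback
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on the default port on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Verify reachability in both directions, as the test does before starting nvmf_tgt
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1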
00:12:25.445 [2024-07-16 00:23:38.941502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.445 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.445 [2024-07-16 00:23:39.019787] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.706 [2024-07-16 00:23:39.095191] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.706 [2024-07-16 00:23:39.095237] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.706 [2024-07-16 00:23:39.095245] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.706 [2024-07-16 00:23:39.095251] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.706 [2024-07-16 00:23:39.095257] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.706 [2024-07-16 00:23:39.095501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.706 [2024-07-16 00:23:39.095674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.706 [2024-07-16 00:23:39.095832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.706 [2024-07-16 00:23:39.095832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.276 [2024-07-16 00:23:39.765818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.276 Malloc0 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.276 Malloc1 00:12:26.276 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.277 [2024-07-16 00:23:39.855666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.277 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:26.538 00:12:26.538 Discovery Log Number of Records 2, Generation counter 2 00:12:26.538 =====Discovery Log Entry 0====== 00:12:26.538 trtype: tcp 00:12:26.538 adrfam: ipv4 00:12:26.538 subtype: current discovery subsystem 00:12:26.538 treq: not required 00:12:26.538 portid: 0 00:12:26.538 trsvcid: 4420 00:12:26.538 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:26.538 traddr: 10.0.0.2 00:12:26.538 eflags: explicit discovery connections, duplicate discovery information 00:12:26.538 sectype: none 00:12:26.538 =====Discovery Log Entry 1====== 00:12:26.538 trtype: tcp 00:12:26.538 adrfam: ipv4 00:12:26.538 subtype: nvme subsystem 00:12:26.538 treq: not required 00:12:26.538 portid: 0 00:12:26.538 trsvcid: 4420 00:12:26.538 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:26.538 traddr: 10.0.0.2 00:12:26.538 eflags: none 00:12:26.538 sectype: none 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:26.538 00:23:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:27.994 00:23:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:29.924 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.184 00:23:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:30.184 /dev/nvme0n1 ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:30.184 00:23:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.444 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.444 rmmod nvme_tcp 00:12:30.705 rmmod nvme_fabrics 00:12:30.705 rmmod nvme_keyring 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 989327 ']' 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 989327 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 989327 ']' 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 989327 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 989327 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 989327' 00:12:30.705 killing process with pid 989327 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 989327 00:12:30.705 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 989327 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.972 00:23:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.888 00:23:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.888 00:12:32.888 real 0m15.971s 00:12:32.888 user 0m23.289s 00:12:32.888 sys 0m6.722s 00:12:32.888 00:23:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.888 00:23:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.888 ************************************ 00:12:32.888 END TEST nvmf_nvme_cli 00:12:32.888 ************************************ 00:12:32.888 00:23:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:32.888 00:23:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:32.888 00:23:46 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:32.888 00:23:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.888 00:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.888 00:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.888 ************************************ 00:12:32.888 START TEST nvmf_vfio_user 00:12:32.888 ************************************ 00:12:32.888 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:33.148 * Looking for test storage... 00:12:33.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:33.148 
00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=990885 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 990885' 00:12:33.148 Process pid: 990885 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 990885 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 990885 ']' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.148 00:23:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:33.148 [2024-07-16 00:23:46.693939] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:33.148 [2024-07-16 00:23:46.694010] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.148 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.148 [2024-07-16 00:23:46.768772] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.408 [2024-07-16 00:23:46.844408] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.408 [2024-07-16 00:23:46.844451] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.408 [2024-07-16 00:23:46.844458] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.408 [2024-07-16 00:23:46.844465] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.408 [2024-07-16 00:23:46.844474] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
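Before any RPCs are issued, the vfio-user variant brings up a second nvmf_tgt instance on a four-core mask and waits for its RPC socket. A condensed sketch of that launch, with the binary path, shm id, tracepoint mask, and core mask copied from this run (waitforlisten is the helper from the suite's autotest_common.sh):

  # Start the SPDK target: shm id 0, tracepoint group mask 0xFFFF, cores 0-3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!

  # Block until the app is listening on the default RPC socket (/var/tmp/spdk.sock)
  waitforlisten "$nvmfpid"

  # The VFIOUSER transport and the per-controller directories under /var/run/vfio-user
  # are then configured through rpc.py, as traced in the log that follows.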
00:12:33.408 [2024-07-16 00:23:46.844611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.408 [2024-07-16 00:23:46.844735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.408 [2024-07-16 00:23:46.844893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.408 [2024-07-16 00:23:46.844894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.978 00:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.978 00:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:33.978 00:23:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:34.917 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:35.178 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:35.178 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:35.178 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.178 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:35.178 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:35.438 Malloc1 00:12:35.438 00:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:35.438 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:35.699 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:35.960 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.960 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:35.960 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:35.960 Malloc2 00:12:35.960 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:36.220 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:36.480 00:23:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:36.480 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:36.480 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:36.480 00:23:50 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:36.480 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:36.480 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:36.480 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:36.480 [2024-07-16 00:23:50.046922] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:36.480 [2024-07-16 00:23:50.046965] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991542 ] 00:12:36.480 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.480 [2024-07-16 00:23:50.079895] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:36.480 [2024-07-16 00:23:50.088538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:36.480 [2024-07-16 00:23:50.088559] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcd202b5000 00:12:36.480 [2024-07-16 00:23:50.089541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.090539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.091542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.092547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.093549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.094552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.095558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.096568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.480 [2024-07-16 00:23:50.097582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:36.480 [2024-07-16 00:23:50.097592] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcd202aa000 00:12:36.480 [2024-07-16 00:23:50.098923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:36.742 [2024-07-16 00:23:50.115849] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:36.743 [2024-07-16 00:23:50.115877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:36.743 [2024-07-16 00:23:50.120713] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:36.743 [2024-07-16 00:23:50.120761] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:36.743 [2024-07-16 00:23:50.120846] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:36.743 [2024-07-16 00:23:50.120862] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:36.743 [2024-07-16 00:23:50.120867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:36.743 [2024-07-16 00:23:50.121712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:36.743 [2024-07-16 00:23:50.121721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:36.743 [2024-07-16 00:23:50.121729] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:36.743 [2024-07-16 00:23:50.122716] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:36.743 [2024-07-16 00:23:50.122725] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:36.743 [2024-07-16 00:23:50.122733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.123718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:36.743 [2024-07-16 00:23:50.123726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.124721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:36.743 [2024-07-16 00:23:50.124730] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:36.743 [2024-07-16 00:23:50.124735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.124742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.124848] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:36.743 [2024-07-16 00:23:50.124853] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.124858] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:36.743 [2024-07-16 00:23:50.125729] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:36.743 [2024-07-16 00:23:50.126736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:36.743 [2024-07-16 00:23:50.127741] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:36.743 [2024-07-16 00:23:50.128741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.743 [2024-07-16 00:23:50.128811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:36.743 [2024-07-16 00:23:50.129750] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:36.743 [2024-07-16 00:23:50.129757] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:36.743 [2024-07-16 00:23:50.129762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129783] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:36.743 [2024-07-16 00:23:50.129791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129806] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.743 [2024-07-16 00:23:50.129811] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.743 [2024-07-16 00:23:50.129824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.129859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.129868] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:36.743 [2024-07-16 00:23:50.129873] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:36.743 [2024-07-16 00:23:50.129877] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:36.743 [2024-07-16 00:23:50.129882] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:36.743 [2024-07-16 00:23:50.129886] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:36.743 [2024-07-16 00:23:50.129891] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:36.743 [2024-07-16 00:23:50.129895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.129926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.129936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.743 [2024-07-16 00:23:50.129945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.743 [2024-07-16 00:23:50.129953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.743 [2024-07-16 00:23:50.129961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.743 [2024-07-16 00:23:50.129966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.129984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.129993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.129998] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:36.743 [2024-07-16 00:23:50.130004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.130039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.130103] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130119] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:36.743 [2024-07-16 00:23:50.130123] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:36.743 [2024-07-16 00:23:50.130129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.130140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.130154] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:36.743 [2024-07-16 00:23:50.130163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130178] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.743 [2024-07-16 00:23:50.130182] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.743 [2024-07-16 00:23:50.130188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.130206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.130218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130237] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.743 [2024-07-16 00:23:50.130241] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.743 [2024-07-16 00:23:50.130247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.743 [2024-07-16 00:23:50.130255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:36.743 [2024-07-16 00:23:50.130262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:36.743 [2024-07-16 00:23:50.130276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:36.743 [2024-07-16 00:23:50.130282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:36.744 [2024-07-16 00:23:50.130287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:36.744 [2024-07-16 00:23:50.130292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:36.744 [2024-07-16 00:23:50.130299] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:36.744 [2024-07-16 00:23:50.130303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:36.744 [2024-07-16 00:23:50.130308] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:36.744 [2024-07-16 00:23:50.130325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130408] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:36.744 [2024-07-16 00:23:50.130412] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:36.744 [2024-07-16 00:23:50.130416] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:36.744 [2024-07-16 00:23:50.130419] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:36.744 [2024-07-16 00:23:50.130426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:36.744 [2024-07-16 00:23:50.130433] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:36.744 
[2024-07-16 00:23:50.130437] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:36.744 [2024-07-16 00:23:50.130443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130451] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:36.744 [2024-07-16 00:23:50.130455] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.744 [2024-07-16 00:23:50.130461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130469] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:36.744 [2024-07-16 00:23:50.130473] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:36.744 [2024-07-16 00:23:50.130479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:36.744 [2024-07-16 00:23:50.130486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:36.744 [2024-07-16 00:23:50.130613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:36.744 ===================================================== 00:12:36.744 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.744 ===================================================== 00:12:36.744 Controller Capabilities/Features 00:12:36.744 ================================ 00:12:36.744 Vendor ID: 4e58 00:12:36.744 Subsystem Vendor ID: 4e58 00:12:36.744 Serial Number: SPDK1 00:12:36.744 Model Number: SPDK bdev Controller 00:12:36.744 Firmware Version: 24.09 00:12:36.744 Recommended Arb Burst: 6 00:12:36.744 IEEE OUI Identifier: 8d 6b 50 00:12:36.744 Multi-path I/O 00:12:36.744 May have multiple subsystem ports: Yes 00:12:36.744 May have multiple controllers: Yes 00:12:36.744 Associated with SR-IOV VF: No 00:12:36.744 Max Data Transfer Size: 131072 00:12:36.744 Max Number of Namespaces: 32 00:12:36.744 Max Number of I/O Queues: 127 00:12:36.744 NVMe Specification Version (VS): 1.3 00:12:36.744 NVMe Specification Version (Identify): 1.3 00:12:36.744 Maximum Queue Entries: 256 00:12:36.744 Contiguous Queues Required: Yes 00:12:36.744 Arbitration Mechanisms Supported 00:12:36.744 Weighted Round Robin: Not Supported 00:12:36.744 Vendor Specific: Not Supported 00:12:36.744 Reset Timeout: 15000 ms 00:12:36.744 Doorbell Stride: 4 bytes 00:12:36.744 NVM Subsystem Reset: Not Supported 00:12:36.744 Command Sets Supported 00:12:36.744 NVM Command Set: Supported 00:12:36.744 Boot Partition: Not Supported 00:12:36.744 Memory Page Size Minimum: 4096 bytes 00:12:36.744 Memory Page Size Maximum: 4096 bytes 00:12:36.744 Persistent Memory Region: Not Supported 
00:12:36.744 Optional Asynchronous Events Supported 00:12:36.744 Namespace Attribute Notices: Supported 00:12:36.744 Firmware Activation Notices: Not Supported 00:12:36.744 ANA Change Notices: Not Supported 00:12:36.744 PLE Aggregate Log Change Notices: Not Supported 00:12:36.744 LBA Status Info Alert Notices: Not Supported 00:12:36.744 EGE Aggregate Log Change Notices: Not Supported 00:12:36.744 Normal NVM Subsystem Shutdown event: Not Supported 00:12:36.744 Zone Descriptor Change Notices: Not Supported 00:12:36.744 Discovery Log Change Notices: Not Supported 00:12:36.744 Controller Attributes 00:12:36.744 128-bit Host Identifier: Supported 00:12:36.744 Non-Operational Permissive Mode: Not Supported 00:12:36.744 NVM Sets: Not Supported 00:12:36.744 Read Recovery Levels: Not Supported 00:12:36.744 Endurance Groups: Not Supported 00:12:36.744 Predictable Latency Mode: Not Supported 00:12:36.744 Traffic Based Keep ALive: Not Supported 00:12:36.744 Namespace Granularity: Not Supported 00:12:36.744 SQ Associations: Not Supported 00:12:36.744 UUID List: Not Supported 00:12:36.744 Multi-Domain Subsystem: Not Supported 00:12:36.744 Fixed Capacity Management: Not Supported 00:12:36.744 Variable Capacity Management: Not Supported 00:12:36.744 Delete Endurance Group: Not Supported 00:12:36.744 Delete NVM Set: Not Supported 00:12:36.744 Extended LBA Formats Supported: Not Supported 00:12:36.744 Flexible Data Placement Supported: Not Supported 00:12:36.744 00:12:36.744 Controller Memory Buffer Support 00:12:36.744 ================================ 00:12:36.744 Supported: No 00:12:36.744 00:12:36.744 Persistent Memory Region Support 00:12:36.744 ================================ 00:12:36.744 Supported: No 00:12:36.744 00:12:36.744 Admin Command Set Attributes 00:12:36.744 ============================ 00:12:36.744 Security Send/Receive: Not Supported 00:12:36.744 Format NVM: Not Supported 00:12:36.744 Firmware Activate/Download: Not Supported 00:12:36.744 Namespace Management: Not Supported 00:12:36.744 Device Self-Test: Not Supported 00:12:36.744 Directives: Not Supported 00:12:36.744 NVMe-MI: Not Supported 00:12:36.744 Virtualization Management: Not Supported 00:12:36.744 Doorbell Buffer Config: Not Supported 00:12:36.744 Get LBA Status Capability: Not Supported 00:12:36.744 Command & Feature Lockdown Capability: Not Supported 00:12:36.744 Abort Command Limit: 4 00:12:36.744 Async Event Request Limit: 4 00:12:36.744 Number of Firmware Slots: N/A 00:12:36.744 Firmware Slot 1 Read-Only: N/A 00:12:36.744 Firmware Activation Without Reset: N/A 00:12:36.744 Multiple Update Detection Support: N/A 00:12:36.744 Firmware Update Granularity: No Information Provided 00:12:36.744 Per-Namespace SMART Log: No 00:12:36.744 Asymmetric Namespace Access Log Page: Not Supported 00:12:36.744 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:36.744 Command Effects Log Page: Supported 00:12:36.744 Get Log Page Extended Data: Supported 00:12:36.744 Telemetry Log Pages: Not Supported 00:12:36.744 Persistent Event Log Pages: Not Supported 00:12:36.744 Supported Log Pages Log Page: May Support 00:12:36.744 Commands Supported & Effects Log Page: Not Supported 00:12:36.744 Feature Identifiers & Effects Log Page:May Support 00:12:36.744 NVMe-MI Commands & Effects Log Page: May Support 00:12:36.744 Data Area 4 for Telemetry Log: Not Supported 00:12:36.744 Error Log Page Entries Supported: 128 00:12:36.744 Keep Alive: Supported 00:12:36.744 Keep Alive Granularity: 10000 ms 00:12:36.744 00:12:36.744 NVM Command Set Attributes 
00:12:36.744 ========================== 00:12:36.744 Submission Queue Entry Size 00:12:36.744 Max: 64 00:12:36.744 Min: 64 00:12:36.744 Completion Queue Entry Size 00:12:36.744 Max: 16 00:12:36.744 Min: 16 00:12:36.744 Number of Namespaces: 32 00:12:36.744 Compare Command: Supported 00:12:36.744 Write Uncorrectable Command: Not Supported 00:12:36.744 Dataset Management Command: Supported 00:12:36.744 Write Zeroes Command: Supported 00:12:36.744 Set Features Save Field: Not Supported 00:12:36.744 Reservations: Not Supported 00:12:36.744 Timestamp: Not Supported 00:12:36.744 Copy: Supported 00:12:36.744 Volatile Write Cache: Present 00:12:36.744 Atomic Write Unit (Normal): 1 00:12:36.744 Atomic Write Unit (PFail): 1 00:12:36.744 Atomic Compare & Write Unit: 1 00:12:36.744 Fused Compare & Write: Supported 00:12:36.744 Scatter-Gather List 00:12:36.744 SGL Command Set: Supported (Dword aligned) 00:12:36.744 SGL Keyed: Not Supported 00:12:36.744 SGL Bit Bucket Descriptor: Not Supported 00:12:36.745 SGL Metadata Pointer: Not Supported 00:12:36.745 Oversized SGL: Not Supported 00:12:36.745 SGL Metadata Address: Not Supported 00:12:36.745 SGL Offset: Not Supported 00:12:36.745 Transport SGL Data Block: Not Supported 00:12:36.745 Replay Protected Memory Block: Not Supported 00:12:36.745 00:12:36.745 Firmware Slot Information 00:12:36.745 ========================= 00:12:36.745 Active slot: 1 00:12:36.745 Slot 1 Firmware Revision: 24.09 00:12:36.745 00:12:36.745 00:12:36.745 Commands Supported and Effects 00:12:36.745 ============================== 00:12:36.745 Admin Commands 00:12:36.745 -------------- 00:12:36.745 Get Log Page (02h): Supported 00:12:36.745 Identify (06h): Supported 00:12:36.745 Abort (08h): Supported 00:12:36.745 Set Features (09h): Supported 00:12:36.745 Get Features (0Ah): Supported 00:12:36.745 Asynchronous Event Request (0Ch): Supported 00:12:36.745 Keep Alive (18h): Supported 00:12:36.745 I/O Commands 00:12:36.745 ------------ 00:12:36.745 Flush (00h): Supported LBA-Change 00:12:36.745 Write (01h): Supported LBA-Change 00:12:36.745 Read (02h): Supported 00:12:36.745 Compare (05h): Supported 00:12:36.745 Write Zeroes (08h): Supported LBA-Change 00:12:36.745 Dataset Management (09h): Supported LBA-Change 00:12:36.745 Copy (19h): Supported LBA-Change 00:12:36.745 00:12:36.745 Error Log 00:12:36.745 ========= 00:12:36.745 00:12:36.745 Arbitration 00:12:36.745 =========== 00:12:36.745 Arbitration Burst: 1 00:12:36.745 00:12:36.745 Power Management 00:12:36.745 ================ 00:12:36.745 Number of Power States: 1 00:12:36.745 Current Power State: Power State #0 00:12:36.745 Power State #0: 00:12:36.745 Max Power: 0.00 W 00:12:36.745 Non-Operational State: Operational 00:12:36.745 Entry Latency: Not Reported 00:12:36.745 Exit Latency: Not Reported 00:12:36.745 Relative Read Throughput: 0 00:12:36.745 Relative Read Latency: 0 00:12:36.745 Relative Write Throughput: 0 00:12:36.745 Relative Write Latency: 0 00:12:36.745 Idle Power: Not Reported 00:12:36.745 Active Power: Not Reported 00:12:36.745 Non-Operational Permissive Mode: Not Supported 00:12:36.745 00:12:36.745 Health Information 00:12:36.745 ================== 00:12:36.745 Critical Warnings: 00:12:36.745 Available Spare Space: OK 00:12:36.745 Temperature: OK 00:12:36.745 Device Reliability: OK 00:12:36.745 Read Only: No 00:12:36.745 Volatile Memory Backup: OK 00:12:36.745 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:36.745 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:36.745 Available Spare: 0% 00:12:36.745 
Available Spare Threshold: 0% 00:12:36.745 [2024-07-16 00:23:50.130713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:36.745 [2024-07-16 00:23:50.130721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:36.745 [2024-07-16 00:23:50.130747] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:36.745 [2024-07-16 00:23:50.130756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.745 [2024-07-16 00:23:50.130762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.745 [2024-07-16 00:23:50.130769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.745 [2024-07-16 00:23:50.130775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.745 [2024-07-16 00:23:50.131765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:36.745 [2024-07-16 00:23:50.131775] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:36.745 [2024-07-16 00:23:50.132768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.745 [2024-07-16 00:23:50.132807] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:36.745 [2024-07-16 00:23:50.132814] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:36.745 [2024-07-16 00:23:50.133776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:36.745 [2024-07-16 00:23:50.133786] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:36.745 [2024-07-16 00:23:50.133853] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:36.745 [2024-07-16 00:23:50.139240] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:36.745 Life Percentage Used: 0% 00:12:36.745 Data Units Read: 0 00:12:36.745 Data Units Written: 0 00:12:36.745 Host Read Commands: 0 00:12:36.745 Host Write Commands: 0 00:12:36.745 Controller Busy Time: 0 minutes 00:12:36.745 Power Cycles: 0 00:12:36.745 Power On Hours: 0 hours 00:12:36.745 Unsafe Shutdowns: 0 00:12:36.745 Unrecoverable Media Errors: 0 00:12:36.745 Lifetime Error Log Entries: 0 00:12:36.745 Warning Temperature Time: 0 minutes 00:12:36.745 Critical Temperature Time: 0 minutes 00:12:36.745 00:12:36.745 Number of Queues 00:12:36.745 ================ 00:12:36.745 Number of I/O Submission Queues: 127 00:12:36.745 Number of I/O Completion Queues: 127 00:12:36.745 00:12:36.745 Active Namespaces 00:12:36.745 ================= 00:12:36.745 Namespace ID:1 00:12:36.745 Error Recovery Timeout: Unlimited 00:12:36.745 Command 
Set Identifier: NVM (00h) 00:12:36.745 Deallocate: Supported 00:12:36.745 Deallocated/Unwritten Error: Not Supported 00:12:36.745 Deallocated Read Value: Unknown 00:12:36.745 Deallocate in Write Zeroes: Not Supported 00:12:36.745 Deallocated Guard Field: 0xFFFF 00:12:36.745 Flush: Supported 00:12:36.745 Reservation: Supported 00:12:36.745 Namespace Sharing Capabilities: Multiple Controllers 00:12:36.745 Size (in LBAs): 131072 (0GiB) 00:12:36.745 Capacity (in LBAs): 131072 (0GiB) 00:12:36.745 Utilization (in LBAs): 131072 (0GiB) 00:12:36.745 NGUID: 6A60092579594D6592743B010670D04B 00:12:36.745 UUID: 6a600925-7959-4d65-9274-3b010670d04b 00:12:36.745 Thin Provisioning: Not Supported 00:12:36.745 Per-NS Atomic Units: Yes 00:12:36.745 Atomic Boundary Size (Normal): 0 00:12:36.745 Atomic Boundary Size (PFail): 0 00:12:36.745 Atomic Boundary Offset: 0 00:12:36.745 Maximum Single Source Range Length: 65535 00:12:36.745 Maximum Copy Length: 65535 00:12:36.745 Maximum Source Range Count: 1 00:12:36.745 NGUID/EUI64 Never Reused: No 00:12:36.745 Namespace Write Protected: No 00:12:36.745 Number of LBA Formats: 1 00:12:36.745 Current LBA Format: LBA Format #00 00:12:36.745 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:36.745 00:12:36.745 00:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:36.745 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.745 [2024-07-16 00:23:50.325867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.026 Initializing NVMe Controllers 00:12:42.026 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.026 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:42.026 Initialization complete. Launching workers. 00:12:42.026 ======================================================== 00:12:42.026 Latency(us) 00:12:42.026 Device Information : IOPS MiB/s Average min max 00:12:42.026 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39948.83 156.05 3203.97 849.65 10773.82 00:12:42.026 ======================================================== 00:12:42.026 Total : 39948.83 156.05 3203.97 849.65 10773.82 00:12:42.026 00:12:42.026 [2024-07-16 00:23:55.344162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.026 00:23:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:42.026 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.026 [2024-07-16 00:23:55.523011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.304 Initializing NVMe Controllers 00:12:47.304 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:47.304 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:47.304 Initialization complete. Launching workers. 
00:12:47.304 ======================================================== 00:12:47.304 Latency(us) 00:12:47.304 Device Information : IOPS MiB/s Average min max 00:12:47.304 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.52 7626.38 8018.69 00:12:47.304 ======================================================== 00:12:47.304 Total : 16051.20 62.70 7980.52 7626.38 8018.69 00:12:47.304 00:12:47.304 [2024-07-16 00:24:00.557580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.304 00:24:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:47.304 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.304 [2024-07-16 00:24:00.750452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.587 [2024-07-16 00:24:05.810381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.587 Initializing NVMe Controllers 00:12:52.587 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.587 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:52.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:52.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:52.587 Initialization complete. Launching workers. 00:12:52.587 Starting thread on core 2 00:12:52.587 Starting thread on core 3 00:12:52.587 Starting thread on core 1 00:12:52.587 00:24:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:52.587 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.587 [2024-07-16 00:24:06.076196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.904 [2024-07-16 00:24:09.138604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.904 Initializing NVMe Controllers 00:12:55.904 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.904 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.904 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:55.904 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:55.904 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:55.904 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:55.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:55.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:55.905 Initialization complete. Launching workers. 
00:12:55.905 Starting thread on core 1 with urgent priority queue 00:12:55.905 Starting thread on core 2 with urgent priority queue 00:12:55.905 Starting thread on core 3 with urgent priority queue 00:12:55.905 Starting thread on core 0 with urgent priority queue 00:12:55.905 SPDK bdev Controller (SPDK1 ) core 0: 7642.67 IO/s 13.08 secs/100000 ios 00:12:55.905 SPDK bdev Controller (SPDK1 ) core 1: 15557.67 IO/s 6.43 secs/100000 ios 00:12:55.905 SPDK bdev Controller (SPDK1 ) core 2: 9661.67 IO/s 10.35 secs/100000 ios 00:12:55.905 SPDK bdev Controller (SPDK1 ) core 3: 14928.67 IO/s 6.70 secs/100000 ios 00:12:55.905 ======================================================== 00:12:55.905 00:12:55.905 00:24:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:55.905 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.905 [2024-07-16 00:24:09.412711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.905 Initializing NVMe Controllers 00:12:55.905 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.905 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.905 Namespace ID: 1 size: 0GB 00:12:55.905 Initialization complete. 00:12:55.905 INFO: using host memory buffer for IO 00:12:55.905 Hello world! 00:12:55.905 [2024-07-16 00:24:09.446913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.905 00:24:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:56.165 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.165 [2024-07-16 00:24:09.713655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.107 Initializing NVMe Controllers 00:12:57.107 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.107 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.108 Initialization complete. Launching workers. 
00:12:57.108 submit (in ns) avg, min, max = 7464.7, 3924.2, 4006936.7 00:12:57.108 complete (in ns) avg, min, max = 18635.7, 2453.3, 4995869.2 00:12:57.108 00:12:57.108 Submit histogram 00:12:57.108 ================ 00:12:57.108 Range in us Cumulative Count 00:12:57.108 3.920 - 3.947: 0.8498% ( 162) 00:12:57.108 3.947 - 3.973: 4.5638% ( 708) 00:12:57.108 3.973 - 4.000: 12.5007% ( 1513) 00:12:57.108 4.000 - 4.027: 23.8787% ( 2169) 00:12:57.108 4.027 - 4.053: 36.2902% ( 2366) 00:12:57.108 4.053 - 4.080: 49.4046% ( 2500) 00:12:57.108 4.080 - 4.107: 66.3537% ( 3231) 00:12:57.108 4.107 - 4.133: 79.5677% ( 2519) 00:12:57.108 4.133 - 4.160: 88.3072% ( 1666) 00:12:57.108 4.160 - 4.187: 93.1700% ( 927) 00:12:57.108 4.187 - 4.213: 95.7509% ( 492) 00:12:57.108 4.213 - 4.240: 96.7319% ( 187) 00:12:57.108 4.240 - 4.267: 97.0834% ( 67) 00:12:57.108 4.267 - 4.293: 97.2617% ( 34) 00:12:57.108 4.293 - 4.320: 97.3404% ( 15) 00:12:57.108 4.320 - 4.347: 97.4243% ( 16) 00:12:57.108 4.347 - 4.373: 97.5030% ( 15) 00:12:57.108 4.373 - 4.400: 97.5817% ( 15) 00:12:57.108 4.400 - 4.427: 97.6289% ( 9) 00:12:57.108 4.427 - 4.453: 97.6919% ( 12) 00:12:57.108 4.453 - 4.480: 97.7181% ( 5) 00:12:57.108 4.480 - 4.507: 97.7810% ( 12) 00:12:57.108 4.507 - 4.533: 97.8125% ( 6) 00:12:57.108 4.533 - 4.560: 97.8492% ( 7) 00:12:57.108 4.560 - 4.587: 97.8755% ( 5) 00:12:57.108 4.587 - 4.613: 97.9279% ( 10) 00:12:57.108 4.613 - 4.640: 97.9804% ( 10) 00:12:57.108 4.640 - 4.667: 98.0381% ( 11) 00:12:57.108 4.667 - 4.693: 98.1063% ( 13) 00:12:57.108 4.693 - 4.720: 98.1955% ( 17) 00:12:57.108 4.720 - 4.747: 98.2532% ( 11) 00:12:57.108 4.747 - 4.773: 98.3318% ( 15) 00:12:57.108 4.773 - 4.800: 98.4682% ( 26) 00:12:57.108 4.800 - 4.827: 98.5679% ( 19) 00:12:57.108 4.827 - 4.853: 98.6466% ( 15) 00:12:57.108 4.853 - 4.880: 98.7043% ( 11) 00:12:57.108 4.880 - 4.907: 98.7725% ( 13) 00:12:57.108 4.907 - 4.933: 98.8722% ( 19) 00:12:57.108 4.933 - 4.960: 98.9404% ( 13) 00:12:57.108 4.960 - 4.987: 99.0138% ( 14) 00:12:57.108 4.987 - 5.013: 99.0663% ( 10) 00:12:57.108 5.013 - 5.040: 99.0977% ( 6) 00:12:57.108 5.040 - 5.067: 99.1449% ( 9) 00:12:57.108 5.067 - 5.093: 99.1817% ( 7) 00:12:57.108 5.093 - 5.120: 99.2341% ( 10) 00:12:57.108 5.120 - 5.147: 99.2551% ( 4) 00:12:57.108 5.147 - 5.173: 99.2656% ( 2) 00:12:57.108 5.173 - 5.200: 99.2866% ( 4) 00:12:57.108 5.200 - 5.227: 99.3076% ( 4) 00:12:57.108 5.253 - 5.280: 99.3128% ( 1) 00:12:57.108 5.307 - 5.333: 99.3390% ( 5) 00:12:57.108 5.333 - 5.360: 99.3443% ( 1) 00:12:57.108 5.387 - 5.413: 99.3495% ( 1) 00:12:57.108 5.440 - 5.467: 99.3653% ( 3) 00:12:57.108 5.520 - 5.547: 99.3705% ( 1) 00:12:57.108 5.573 - 5.600: 99.3758% ( 1) 00:12:57.108 5.600 - 5.627: 99.3810% ( 1) 00:12:57.108 5.627 - 5.653: 99.3967% ( 3) 00:12:57.108 5.653 - 5.680: 99.4020% ( 1) 00:12:57.108 5.760 - 5.787: 99.4072% ( 1) 00:12:57.108 5.840 - 5.867: 99.4177% ( 2) 00:12:57.108 5.867 - 5.893: 99.4335% ( 3) 00:12:57.108 5.973 - 6.000: 99.4439% ( 2) 00:12:57.108 6.053 - 6.080: 99.4492% ( 1) 00:12:57.108 6.080 - 6.107: 99.4544% ( 1) 00:12:57.108 6.107 - 6.133: 99.4754% ( 4) 00:12:57.108 6.133 - 6.160: 99.4807% ( 1) 00:12:57.108 6.160 - 6.187: 99.4912% ( 2) 00:12:57.108 6.187 - 6.213: 99.5017% ( 2) 00:12:57.108 6.267 - 6.293: 99.5121% ( 2) 00:12:57.108 6.293 - 6.320: 99.5174% ( 1) 00:12:57.108 6.320 - 6.347: 99.5384% ( 4) 00:12:57.108 6.347 - 6.373: 99.5594% ( 4) 00:12:57.108 6.400 - 6.427: 99.5646% ( 1) 00:12:57.108 6.427 - 6.453: 99.5698% ( 1) 00:12:57.108 6.507 - 6.533: 99.5751% ( 1) 00:12:57.108 6.560 - 6.587: 99.5803% ( 1) 
00:12:57.108 6.720 - 6.747: 99.5856% ( 1) 00:12:57.108 6.800 - 6.827: 99.5908% ( 1) 00:12:57.108 6.933 - 6.987: 99.6013% ( 2) 00:12:57.108 6.987 - 7.040: 99.6066% ( 1) 00:12:57.108 7.040 - 7.093: 99.6118% ( 1) 00:12:57.108 7.200 - 7.253: 99.6171% ( 1) 00:12:57.108 7.307 - 7.360: 99.6223% ( 1) 00:12:57.108 7.467 - 7.520: 99.6276% ( 1) 00:12:57.108 7.627 - 7.680: 99.6328% ( 1) 00:12:57.108 7.680 - 7.733: 99.6380% ( 1) 00:12:57.108 7.733 - 7.787: 99.6485% ( 2) 00:12:57.108 7.787 - 7.840: 99.6590% ( 2) 00:12:57.108 7.840 - 7.893: 99.6695% ( 2) 00:12:57.108 7.947 - 8.000: 99.6748% ( 1) 00:12:57.108 8.000 - 8.053: 99.6905% ( 3) 00:12:57.108 8.053 - 8.107: 99.7062% ( 3) 00:12:57.108 8.213 - 8.267: 99.7115% ( 1) 00:12:57.108 8.320 - 8.373: 99.7220% ( 2) 00:12:57.108 8.533 - 8.587: 99.7272% ( 1) 00:12:57.108 8.587 - 8.640: 99.7482% ( 4) 00:12:57.108 8.693 - 8.747: 99.7587% ( 2) 00:12:57.108 8.747 - 8.800: 99.7692% ( 2) 00:12:57.108 8.800 - 8.853: 99.7744% ( 1) 00:12:57.108 8.853 - 8.907: 99.7849% ( 2) 00:12:57.108 8.907 - 8.960: 99.8007% ( 3) 00:12:57.108 8.960 - 9.013: 99.8059% ( 1) 00:12:57.108 9.013 - 9.067: 99.8112% ( 1) 00:12:57.108 9.120 - 9.173: 99.8216% ( 2) 00:12:57.108 9.173 - 9.227: 99.8321% ( 2) 00:12:57.108 9.280 - 9.333: 99.8374% ( 1) 00:12:57.108 9.387 - 9.440: 99.8426% ( 1) 00:12:57.108 9.493 - 9.547: 99.8479% ( 1) 00:12:57.108 9.547 - 9.600: 99.8531% ( 1) 00:12:57.108 9.600 - 9.653: 99.8584% ( 1) 00:12:57.108 9.707 - 9.760: 99.8636% ( 1) 00:12:57.108 9.760 - 9.813: 99.8689% ( 1) 00:12:57.108 9.920 - 9.973: 99.8793% ( 2) 00:12:57.108 9.973 - 10.027: 99.8846% ( 1) 00:12:57.108 10.240 - 10.293: 99.8951% ( 2) 00:12:57.108 10.400 - 10.453: 99.9003% ( 1) 00:12:57.108 10.507 - 10.560: 99.9056% ( 1) 00:12:57.108 13.973 - 14.080: 99.9108% ( 1) 00:12:57.108 19.733 - 19.840: 99.9161% ( 1) 00:12:57.108 3986.773 - 4014.080: 100.0000% ( 16) 00:12:57.108 00:12:57.108 Complete histogram 00:12:57.108 ================== 00:12:57.108 Range in us Cumulative Count 00:12:57.108 2.453 - 2.467: 0.4879% ( 93) 00:12:57.108 2.467 - 2.480: 1.8150% ( 253) 00:12:57.108 2.480 - 2.493: 2.6911% ( 167) 00:12:57.108 2.493 - 2.507: 3.1789% ( 93) 00:12:57.108 2.507 - 2.520: 3.6196% ( 84) 00:12:57.108 2.520 - 2.533: 10.4128% ( 1295) 00:12:57.108 2.533 - 2.547: 33.5204% ( 4405) 00:12:57.108 2.547 - 2.560: 54.2097% ( 3944) 00:12:57.108 2.560 - 2.573: 71.2637% ( 3251) 00:12:57.108 2.573 - 2.587: 85.4692% ( 2708) 00:12:57.108 2.587 - 2.600: 92.8133% ( 1400) 00:12:57.108 2.600 - 2.613: 95.7772% ( 565) 00:12:57.108 2.613 - 2.627: 96.7319% ( 182) 00:12:57.108 2.627 - 2.640: 97.0466% ( 60) 00:12:57.108 2.640 - 2.653: 97.1201% ( 14) 00:12:57.108 2.653 - 2.667: 97.1463% ( 5) 00:12:57.108 2.667 - 2.680: 97.1883% ( 8) 00:12:57.108 2.680 - 2.693: 97.2355% ( 9) 00:12:57.108 2.693 - 2.707: 97.2722% ( 7) 00:12:57.108 2.707 - 2.720: 97.3247% ( 10) 00:12:57.108 2.720 - 2.733: 97.3876% ( 12) 00:12:57.108 2.733 - 2.747: 97.4138% ( 5) 00:12:57.108 2.747 - 2.760: 97.4768% ( 12) 00:12:57.108 2.760 - 2.773: 97.5188% ( 8) 00:12:57.108 2.773 - 2.787: 97.5712% ( 10) 00:12:57.108 2.787 - 2.800: 97.6394% ( 13) 00:12:57.108 2.800 - 2.813: 97.7024% ( 12) 00:12:57.108 2.813 - 2.827: 97.7338% ( 6) 00:12:57.108 2.827 - 2.840: 97.7653% ( 6) 00:12:57.108 2.840 - 2.853: 97.8020% ( 7) 00:12:57.108 2.853 - 2.867: 97.8492% ( 9) 00:12:57.108 2.867 - 2.880: 97.8807% ( 6) 00:12:57.108 2.880 - 2.893: 97.9332% ( 10) 00:12:57.108 2.893 - 2.907: 97.9594% ( 5) 00:12:57.108 2.907 - 2.920: 97.9804% ( 4) 00:12:57.108 2.920 - 2.933: 98.0171% ( 7) 00:12:57.108 2.933 - 
2.947: 98.0433% ( 5) 00:12:57.108 2.947 - 2.960: 98.0853% ( 8) 00:12:57.108 2.960 - 2.973: 98.1378% ( 10) 00:12:57.108 2.973 - 2.987: 98.1745% ( 7) 00:12:57.108 2.987 - 3.000: 98.2112% ( 7) 00:12:57.108 3.000 - 3.013: 98.2427% ( 6) 00:12:57.108 3.013 - 3.027: 98.2846% ( 8) 00:12:57.108 3.027 - 3.040: 98.3214% ( 7) 00:12:57.108 3.040 - 3.053: 98.3791% ( 11) 00:12:57.108 3.053 - 3.067: 98.4420% ( 12) 00:12:57.108 3.067 - 3.080: 98.4735% ( 6) 00:12:57.108 3.080 - 3.093: 98.5050% ( 6) 00:12:57.108 3.093 - 3.107: 98.5522% ( 9) 00:12:57.108 3.107 - 3.120: 98.5732% ( 4) 00:12:57.108 3.120 - 3.133: 98.6466% ( 14) 00:12:57.108 3.133 - 3.147: 98.6676% ( 4) 00:12:57.108 3.147 - 3.160: 98.7200% ( 10) 00:12:57.108 3.160 - 3.173: 98.7358% ( 3) 00:12:57.108 3.173 - 3.187: 98.7725% ( 7) 00:12:57.108 3.187 - 3.200: 98.8302% ( 11) 00:12:57.108 3.200 - 3.213: 98.8774% ( 9) 00:12:57.108 3.213 - 3.227: 98.9299% ( 10) 00:12:57.108 3.227 - 3.240: 98.9561% ( 5) 00:12:57.108 3.240 - 3.253: 98.9928% ( 7) 00:12:57.108 3.253 - 3.267: 99.0243% ( 6) 00:12:57.108 3.267 - 3.280: 99.0558% ( 6) 00:12:57.108 3.280 - 3.293: 99.0767% ( 4) 00:12:57.108 3.293 - 3.307: 99.1030% ( 5) 00:12:57.108 3.307 - 3.320: 99.1187% ( 3) 00:12:57.108 3.320 - 3.333: 99.1449% ( 5) 00:12:57.108 3.333 - 3.347: 99.1712% ( 5) 00:12:57.108 3.347 - 3.360: 99.1922% ( 4) 00:12:57.108 3.360 - 3.373: 99.2026% ( 2) 00:12:57.109 3.373 - 3.387: 99.2079% ( 1) 00:12:57.109 3.387 - 3.400: 99.2131% ( 1) 00:12:57.109 3.400 - 3.413: 99.2289% ( 3) 00:12:57.109 3.413 - 3.440: 99.2394% ( 2) 00:12:57.109 3.520 - 3.547: 99.2446% ( 1) 00:12:57.109 3.600 - 3.627: 99.2499% ( 1) 00:12:57.109 3.680 - 3.707: 99.2603% ( 2) 00:12:57.109 3.840 - 3.867: 99.2656% ( 1) 00:12:57.109 4.800 - 4.827: 99.2708% ( 1) 00:12:57.109 5.200 - 5.227: 99.2761% ( 1) 00:12:57.109 5.387 - 5.413: 99.2813% ( 1) 00:12:57.109 5.920 - 5.947: 99.2918% ( 2) 00:12:57.109 5.947 - 5.973: 99.2971% ( 1) 00:12:57.109 6.027 - 6.053: 99.3023% ( 1) 00:12:57.109 6.053 - 6.080: 99.3128% ( 2) 00:12:57.109 6.107 - 6.133: 99.3181% ( 1) 00:12:57.109 6.133 - 6.160: 99.3285% ( 2) 00:12:57.109 6.160 - 6.187: 99.3338% ( 1) 00:12:57.109 6.240 - 6.267: 99.3390% ( 1) 00:12:57.109 6.267 - 6.293: 99.3443% ( 1) 00:12:57.109 6.427 - 6.453: 99.3495% ( 1) 00:12:57.109 6.453 - 6.480: 99.3600% ( 2) 00:12:57.109 6.533 - 6.560: 99.3705% ( 2) 00:12:57.109 6.560 - 6.587: 99.3810% ( 2) 00:12:57.109 6.747 - 6.773: 99.3862% ( 1) 00:12:57.109 6.773 - 6.800: 99.3915% ( 1) 00:12:57.109 6.800 - 6.827: 99.3967% ( 1) 00:12:57.109 6.827 - 6.880: 99.4020% ( 1) 00:12:57.109 6.880 - 6.933: 99.4072% ( 1) 00:12:57.109 7.040 - 7.093: 99.4125% ( 1) 00:12:57.109 7.093 - 7.147: 99.4177% ( 1) 00:12:57.109 7.147 - 7.200: 99.4230% ( 1) 00:12:57.109 7.253 - 7.307: 99.4282% ( 1) 00:12:57.109 7.360 - 7.413: 99.4335% ( 1) 00:12:57.109 7.413 - 7.467: 99.4387% ( 1) 00:12:57.109 7.627 - 7.680: 99.4492% ( 2) 00:12:57.109 7.733 - 7.787: 99.4544% ( 1) 00:12:57.109 7.840 - 7.893: 99.4597% ( 1) 00:12:57.109 7.893 - 7.947: 99.4649% ( 1) 00:12:57.109 7.947 - 8.000: 99.4754% ( 2) 00:12:57.109 8.107 - 8.160: 99.4807% ( 1) 00:12:57.109 8.320 - 8.373: 99.4859% ( 1) 00:12:57.109 8.373 - 8.427: 99.4912% ( 1) 00:12:57.109 8.427 - 8.480: 99.5069% ( 3) 00:12:57.109 8.587 - 8.640: 99.5121% ( 1) 00:12:57.109 8.640 - 8.693: 99.5174% ( 1) 00:12:57.109 8.693 - 8.747: 99.5226% ( 1) 00:12:57.109 9.120 - 9.173: 99.5279% ( 1) 00:12:57.109 9.227 - 9.280: 99.5331% ( 1) 00:12:57.109 9.653 - 9.707: 99.5384% ( 1) 00:12:57.109 10.240 - 10.293: 99.5489% ( 2) 00:12:57.109 10.720 - 10.773: 99.5541% 
( 1) 00:12:57.109 12.587 - 12.640: 99.5594% ( 1) 00:12:57.109 13.867 - 13.973: 99.5698% ( 2) 00:12:57.109 14.720 - 14.827: 99.5751% ( 1) 00:12:57.109 14.827 - 14.933: 99.5803% ( 1) 00:12:57.109 15.253 - 15.360: 99.5856% ( 1) 00:12:57.109 15.680 - 15.787: 99.5908% ( 1) 00:12:57.109 44.160 - 44.373: 99.5961% ( 1) 00:12:57.109 2048.000 - 2061.653: 99.6013% ( 1) 00:12:57.109 3140.267 - 3153.920: 99.6066% ( 1) 00:12:57.109 3986.773 - 4014.080: 99.9948% ( 74) 00:12:57.369 [2024-07-16 00:24:10.735286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.369 4969.813 - 4997.120: 100.0000% ( 1) 00:12:57.369 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.369 [ 00:12:57.369 { 00:12:57.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.369 "subtype": "Discovery", 00:12:57.369 "listen_addresses": [], 00:12:57.369 "allow_any_host": true, 00:12:57.369 "hosts": [] 00:12:57.369 }, 00:12:57.369 { 00:12:57.369 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.369 "subtype": "NVMe", 00:12:57.369 "listen_addresses": [ 00:12:57.369 { 00:12:57.369 "trtype": "VFIOUSER", 00:12:57.369 "adrfam": "IPv4", 00:12:57.369 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.369 "trsvcid": "0" 00:12:57.369 } 00:12:57.369 ], 00:12:57.369 "allow_any_host": true, 00:12:57.369 "hosts": [], 00:12:57.369 "serial_number": "SPDK1", 00:12:57.369 "model_number": "SPDK bdev Controller", 00:12:57.369 "max_namespaces": 32, 00:12:57.369 "min_cntlid": 1, 00:12:57.369 "max_cntlid": 65519, 00:12:57.369 "namespaces": [ 00:12:57.369 { 00:12:57.369 "nsid": 1, 00:12:57.369 "bdev_name": "Malloc1", 00:12:57.369 "name": "Malloc1", 00:12:57.369 "nguid": "6A60092579594D6592743B010670D04B", 00:12:57.369 "uuid": "6a600925-7959-4d65-9274-3b010670d04b" 00:12:57.369 } 00:12:57.369 ] 00:12:57.369 }, 00:12:57.369 { 00:12:57.369 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.369 "subtype": "NVMe", 00:12:57.369 "listen_addresses": [ 00:12:57.369 { 00:12:57.369 "trtype": "VFIOUSER", 00:12:57.369 "adrfam": "IPv4", 00:12:57.369 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.369 "trsvcid": "0" 00:12:57.369 } 00:12:57.369 ], 00:12:57.369 "allow_any_host": true, 00:12:57.369 "hosts": [], 00:12:57.369 "serial_number": "SPDK2", 00:12:57.369 "model_number": "SPDK bdev Controller", 00:12:57.369 "max_namespaces": 32, 00:12:57.369 "min_cntlid": 1, 00:12:57.369 "max_cntlid": 65519, 00:12:57.369 "namespaces": [ 00:12:57.369 { 00:12:57.369 "nsid": 1, 00:12:57.369 "bdev_name": "Malloc2", 00:12:57.369 "name": "Malloc2", 00:12:57.369 "nguid": "E4381BE4FF9849008BD41592D5181F13", 00:12:57.369 "uuid": "e4381be4-ff98-4900-8bd4-1592d5181f13" 00:12:57.369 } 00:12:57.369 ] 00:12:57.369 } 00:12:57.369 ] 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:57.369 00:24:10 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=996281 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:57.369 00:24:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:57.630 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.630 Malloc3 00:12:57.630 [2024-07-16 00:24:11.140452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.630 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:57.890 [2024-07-16 00:24:11.303548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.890 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.890 Asynchronous Event Request test 00:12:57.890 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.890 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.890 Registering asynchronous event callbacks... 00:12:57.890 Starting namespace attribute notice tests for all controllers... 00:12:57.890 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:57.890 aer_cb - Changed Namespace 00:12:57.890 Cleaning up... 
00:12:57.890 [ 00:12:57.890 { 00:12:57.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.890 "subtype": "Discovery", 00:12:57.890 "listen_addresses": [], 00:12:57.890 "allow_any_host": true, 00:12:57.890 "hosts": [] 00:12:57.890 }, 00:12:57.890 { 00:12:57.890 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.890 "subtype": "NVMe", 00:12:57.890 "listen_addresses": [ 00:12:57.890 { 00:12:57.890 "trtype": "VFIOUSER", 00:12:57.890 "adrfam": "IPv4", 00:12:57.890 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.890 "trsvcid": "0" 00:12:57.890 } 00:12:57.890 ], 00:12:57.890 "allow_any_host": true, 00:12:57.890 "hosts": [], 00:12:57.890 "serial_number": "SPDK1", 00:12:57.890 "model_number": "SPDK bdev Controller", 00:12:57.890 "max_namespaces": 32, 00:12:57.890 "min_cntlid": 1, 00:12:57.890 "max_cntlid": 65519, 00:12:57.890 "namespaces": [ 00:12:57.890 { 00:12:57.890 "nsid": 1, 00:12:57.890 "bdev_name": "Malloc1", 00:12:57.890 "name": "Malloc1", 00:12:57.890 "nguid": "6A60092579594D6592743B010670D04B", 00:12:57.890 "uuid": "6a600925-7959-4d65-9274-3b010670d04b" 00:12:57.890 }, 00:12:57.890 { 00:12:57.890 "nsid": 2, 00:12:57.890 "bdev_name": "Malloc3", 00:12:57.890 "name": "Malloc3", 00:12:57.890 "nguid": "9843DFA8CA6E49578F3A1DF2AF82C6BB", 00:12:57.890 "uuid": "9843dfa8-ca6e-4957-8f3a-1df2af82c6bb" 00:12:57.890 } 00:12:57.890 ] 00:12:57.890 }, 00:12:57.890 { 00:12:57.890 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.890 "subtype": "NVMe", 00:12:57.890 "listen_addresses": [ 00:12:57.890 { 00:12:57.890 "trtype": "VFIOUSER", 00:12:57.890 "adrfam": "IPv4", 00:12:57.890 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.890 "trsvcid": "0" 00:12:57.890 } 00:12:57.890 ], 00:12:57.890 "allow_any_host": true, 00:12:57.890 "hosts": [], 00:12:57.890 "serial_number": "SPDK2", 00:12:57.890 "model_number": "SPDK bdev Controller", 00:12:57.890 "max_namespaces": 32, 00:12:57.890 "min_cntlid": 1, 00:12:57.890 "max_cntlid": 65519, 00:12:57.890 "namespaces": [ 00:12:57.890 { 00:12:57.890 "nsid": 1, 00:12:57.890 "bdev_name": "Malloc2", 00:12:57.890 "name": "Malloc2", 00:12:57.890 "nguid": "E4381BE4FF9849008BD41592D5181F13", 00:12:57.890 "uuid": "e4381be4-ff98-4900-8bd4-1592d5181f13" 00:12:57.890 } 00:12:57.890 ] 00:12:57.890 } 00:12:57.890 ] 00:12:57.891 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 996281 00:12:57.891 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:57.891 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:57.891 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:57.891 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:58.153 [2024-07-16 00:24:11.524820] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:12:58.153 [2024-07-16 00:24:11.524868] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996421 ] 00:12:58.153 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.153 [2024-07-16 00:24:11.564322] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:58.153 [2024-07-16 00:24:11.566547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:58.153 [2024-07-16 00:24:11.566568] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f14af71f000 00:12:58.153 [2024-07-16 00:24:11.567543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.568548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.569549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.570555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.571562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.572573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.573584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.574588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:58.153 [2024-07-16 00:24:11.575593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:58.153 [2024-07-16 00:24:11.575603] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f14af714000 00:12:58.153 [2024-07-16 00:24:11.576929] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:58.153 [2024-07-16 00:24:11.596399] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:58.153 [2024-07-16 00:24:11.596422] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:58.153 [2024-07-16 00:24:11.598471] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:58.153 [2024-07-16 00:24:11.598518] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:58.153 [2024-07-16 00:24:11.598602] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:58.153 [2024-07-16 00:24:11.598614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:58.153 [2024-07-16 00:24:11.598619] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:58.153 [2024-07-16 00:24:11.599479] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:58.153 [2024-07-16 00:24:11.599489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:58.153 [2024-07-16 00:24:11.599496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:58.153 [2024-07-16 00:24:11.600484] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:58.153 [2024-07-16 00:24:11.600493] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:58.153 [2024-07-16 00:24:11.600500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:58.153 [2024-07-16 00:24:11.601495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:58.153 [2024-07-16 00:24:11.601504] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:58.153 [2024-07-16 00:24:11.602499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:58.153 [2024-07-16 00:24:11.602507] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:58.153 [2024-07-16 00:24:11.602512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:58.153 [2024-07-16 00:24:11.602522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:58.153 [2024-07-16 00:24:11.602627] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:58.153 [2024-07-16 00:24:11.602632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:58.154 [2024-07-16 00:24:11.602637] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:58.154 [2024-07-16 00:24:11.603509] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:58.154 [2024-07-16 00:24:11.604519] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:58.154 [2024-07-16 00:24:11.605522] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:58.154 [2024-07-16 00:24:11.606523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.154 [2024-07-16 00:24:11.606562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:58.154 [2024-07-16 00:24:11.607530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:58.154 [2024-07-16 00:24:11.607539] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:58.154 [2024-07-16 00:24:11.607543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.607565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:58.154 [2024-07-16 00:24:11.607572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.607586] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:58.154 [2024-07-16 00:24:11.607591] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:58.154 [2024-07-16 00:24:11.607603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.618238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.618249] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:58.154 [2024-07-16 00:24:11.618254] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:58.154 [2024-07-16 00:24:11.618258] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:58.154 [2024-07-16 00:24:11.618263] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:58.154 [2024-07-16 00:24:11.618267] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:58.154 [2024-07-16 00:24:11.618272] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:58.154 [2024-07-16 00:24:11.618276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.618286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.618299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:12:58.154 [2024-07-16 00:24:11.626238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.626250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.154 [2024-07-16 00:24:11.626259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.154 [2024-07-16 00:24:11.626267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.154 [2024-07-16 00:24:11.626276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.154 [2024-07-16 00:24:11.626280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.626289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.626298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.634248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.634256] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:58.154 [2024-07-16 00:24:11.634261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.634269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.634275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.634283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.642236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.642301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.642309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.642317] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:58.154 [2024-07-16 00:24:11.642321] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:58.154 [2024-07-16 00:24:11.642328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.650235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.650246] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:58.154 [2024-07-16 00:24:11.650256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.650266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.650273] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:58.154 [2024-07-16 00:24:11.650277] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:58.154 [2024-07-16 00:24:11.650283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.658235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.658249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.658256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.658263] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:58.154 [2024-07-16 00:24:11.658268] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:58.154 [2024-07-16 00:24:11.658274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.663275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.663285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:58.154 
[2024-07-16 00:24:11.663320] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:58.154 [2024-07-16 00:24:11.663324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:58.154 [2024-07-16 00:24:11.663329] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:58.154 [2024-07-16 00:24:11.663345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.674237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.674251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.682247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.690234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.690247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.698235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:58.154 [2024-07-16 00:24:11.698251] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:58.154 [2024-07-16 00:24:11.698256] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:58.154 [2024-07-16 00:24:11.698260] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:58.154 [2024-07-16 00:24:11.698263] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:58.154 [2024-07-16 00:24:11.698270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:58.154 [2024-07-16 00:24:11.698277] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:58.154 [2024-07-16 00:24:11.698282] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:58.154 [2024-07-16 00:24:11.698288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:58.154 [2024-07-16 00:24:11.698295] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:58.154 [2024-07-16 00:24:11.698299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:58.154 [2024-07-16 00:24:11.698305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:12:58.154 [2024-07-16 00:24:11.698313] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:58.155 [2024-07-16 00:24:11.698317] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:58.155 [2024-07-16 00:24:11.698323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:58.155 [2024-07-16 00:24:11.706237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:58.155 [2024-07-16 00:24:11.706251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:58.155 [2024-07-16 00:24:11.706261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:58.155 [2024-07-16 00:24:11.706268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:58.155 ===================================================== 00:12:58.155 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:58.155 ===================================================== 00:12:58.155 Controller Capabilities/Features 00:12:58.155 ================================ 00:12:58.155 Vendor ID: 4e58 00:12:58.155 Subsystem Vendor ID: 4e58 00:12:58.155 Serial Number: SPDK2 00:12:58.155 Model Number: SPDK bdev Controller 00:12:58.155 Firmware Version: 24.09 00:12:58.155 Recommended Arb Burst: 6 00:12:58.155 IEEE OUI Identifier: 8d 6b 50 00:12:58.155 Multi-path I/O 00:12:58.155 May have multiple subsystem ports: Yes 00:12:58.155 May have multiple controllers: Yes 00:12:58.155 Associated with SR-IOV VF: No 00:12:58.155 Max Data Transfer Size: 131072 00:12:58.155 Max Number of Namespaces: 32 00:12:58.155 Max Number of I/O Queues: 127 00:12:58.155 NVMe Specification Version (VS): 1.3 00:12:58.155 NVMe Specification Version (Identify): 1.3 00:12:58.155 Maximum Queue Entries: 256 00:12:58.155 Contiguous Queues Required: Yes 00:12:58.155 Arbitration Mechanisms Supported 00:12:58.155 Weighted Round Robin: Not Supported 00:12:58.155 Vendor Specific: Not Supported 00:12:58.155 Reset Timeout: 15000 ms 00:12:58.155 Doorbell Stride: 4 bytes 00:12:58.155 NVM Subsystem Reset: Not Supported 00:12:58.155 Command Sets Supported 00:12:58.155 NVM Command Set: Supported 00:12:58.155 Boot Partition: Not Supported 00:12:58.155 Memory Page Size Minimum: 4096 bytes 00:12:58.155 Memory Page Size Maximum: 4096 bytes 00:12:58.155 Persistent Memory Region: Not Supported 00:12:58.155 Optional Asynchronous Events Supported 00:12:58.155 Namespace Attribute Notices: Supported 00:12:58.155 Firmware Activation Notices: Not Supported 00:12:58.155 ANA Change Notices: Not Supported 00:12:58.155 PLE Aggregate Log Change Notices: Not Supported 00:12:58.155 LBA Status Info Alert Notices: Not Supported 00:12:58.155 EGE Aggregate Log Change Notices: Not Supported 00:12:58.155 Normal NVM Subsystem Shutdown event: Not Supported 00:12:58.155 Zone Descriptor Change Notices: Not Supported 00:12:58.155 Discovery Log Change Notices: Not Supported 00:12:58.155 Controller Attributes 00:12:58.155 128-bit Host Identifier: Supported 00:12:58.155 Non-Operational Permissive Mode: Not Supported 00:12:58.155 NVM Sets: Not Supported 00:12:58.155 Read Recovery Levels: Not Supported 
00:12:58.155 Endurance Groups: Not Supported 00:12:58.155 Predictable Latency Mode: Not Supported 00:12:58.155 Traffic Based Keep ALive: Not Supported 00:12:58.155 Namespace Granularity: Not Supported 00:12:58.155 SQ Associations: Not Supported 00:12:58.155 UUID List: Not Supported 00:12:58.155 Multi-Domain Subsystem: Not Supported 00:12:58.155 Fixed Capacity Management: Not Supported 00:12:58.155 Variable Capacity Management: Not Supported 00:12:58.155 Delete Endurance Group: Not Supported 00:12:58.155 Delete NVM Set: Not Supported 00:12:58.155 Extended LBA Formats Supported: Not Supported 00:12:58.155 Flexible Data Placement Supported: Not Supported 00:12:58.155 00:12:58.155 Controller Memory Buffer Support 00:12:58.155 ================================ 00:12:58.155 Supported: No 00:12:58.155 00:12:58.155 Persistent Memory Region Support 00:12:58.155 ================================ 00:12:58.155 Supported: No 00:12:58.155 00:12:58.155 Admin Command Set Attributes 00:12:58.155 ============================ 00:12:58.155 Security Send/Receive: Not Supported 00:12:58.155 Format NVM: Not Supported 00:12:58.155 Firmware Activate/Download: Not Supported 00:12:58.155 Namespace Management: Not Supported 00:12:58.155 Device Self-Test: Not Supported 00:12:58.155 Directives: Not Supported 00:12:58.155 NVMe-MI: Not Supported 00:12:58.155 Virtualization Management: Not Supported 00:12:58.155 Doorbell Buffer Config: Not Supported 00:12:58.155 Get LBA Status Capability: Not Supported 00:12:58.155 Command & Feature Lockdown Capability: Not Supported 00:12:58.155 Abort Command Limit: 4 00:12:58.155 Async Event Request Limit: 4 00:12:58.155 Number of Firmware Slots: N/A 00:12:58.155 Firmware Slot 1 Read-Only: N/A 00:12:58.155 Firmware Activation Without Reset: N/A 00:12:58.155 Multiple Update Detection Support: N/A 00:12:58.155 Firmware Update Granularity: No Information Provided 00:12:58.155 Per-Namespace SMART Log: No 00:12:58.155 Asymmetric Namespace Access Log Page: Not Supported 00:12:58.155 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:58.155 Command Effects Log Page: Supported 00:12:58.155 Get Log Page Extended Data: Supported 00:12:58.155 Telemetry Log Pages: Not Supported 00:12:58.155 Persistent Event Log Pages: Not Supported 00:12:58.155 Supported Log Pages Log Page: May Support 00:12:58.155 Commands Supported & Effects Log Page: Not Supported 00:12:58.155 Feature Identifiers & Effects Log Page:May Support 00:12:58.155 NVMe-MI Commands & Effects Log Page: May Support 00:12:58.155 Data Area 4 for Telemetry Log: Not Supported 00:12:58.155 Error Log Page Entries Supported: 128 00:12:58.155 Keep Alive: Supported 00:12:58.155 Keep Alive Granularity: 10000 ms 00:12:58.155 00:12:58.155 NVM Command Set Attributes 00:12:58.155 ========================== 00:12:58.155 Submission Queue Entry Size 00:12:58.155 Max: 64 00:12:58.155 Min: 64 00:12:58.155 Completion Queue Entry Size 00:12:58.155 Max: 16 00:12:58.155 Min: 16 00:12:58.155 Number of Namespaces: 32 00:12:58.155 Compare Command: Supported 00:12:58.155 Write Uncorrectable Command: Not Supported 00:12:58.155 Dataset Management Command: Supported 00:12:58.155 Write Zeroes Command: Supported 00:12:58.155 Set Features Save Field: Not Supported 00:12:58.155 Reservations: Not Supported 00:12:58.155 Timestamp: Not Supported 00:12:58.155 Copy: Supported 00:12:58.155 Volatile Write Cache: Present 00:12:58.155 Atomic Write Unit (Normal): 1 00:12:58.155 Atomic Write Unit (PFail): 1 00:12:58.155 Atomic Compare & Write Unit: 1 00:12:58.155 Fused Compare & Write: 
Supported 00:12:58.155 Scatter-Gather List 00:12:58.155 SGL Command Set: Supported (Dword aligned) 00:12:58.155 SGL Keyed: Not Supported 00:12:58.155 SGL Bit Bucket Descriptor: Not Supported 00:12:58.155 SGL Metadata Pointer: Not Supported 00:12:58.155 Oversized SGL: Not Supported 00:12:58.155 SGL Metadata Address: Not Supported 00:12:58.155 SGL Offset: Not Supported 00:12:58.155 Transport SGL Data Block: Not Supported 00:12:58.155 Replay Protected Memory Block: Not Supported 00:12:58.155 00:12:58.155 Firmware Slot Information 00:12:58.155 ========================= 00:12:58.155 Active slot: 1 00:12:58.155 Slot 1 Firmware Revision: 24.09 00:12:58.155 00:12:58.155 00:12:58.155 Commands Supported and Effects 00:12:58.155 ============================== 00:12:58.155 Admin Commands 00:12:58.155 -------------- 00:12:58.155 Get Log Page (02h): Supported 00:12:58.155 Identify (06h): Supported 00:12:58.155 Abort (08h): Supported 00:12:58.155 Set Features (09h): Supported 00:12:58.155 Get Features (0Ah): Supported 00:12:58.155 Asynchronous Event Request (0Ch): Supported 00:12:58.155 Keep Alive (18h): Supported 00:12:58.155 I/O Commands 00:12:58.155 ------------ 00:12:58.155 Flush (00h): Supported LBA-Change 00:12:58.155 Write (01h): Supported LBA-Change 00:12:58.155 Read (02h): Supported 00:12:58.155 Compare (05h): Supported 00:12:58.155 Write Zeroes (08h): Supported LBA-Change 00:12:58.155 Dataset Management (09h): Supported LBA-Change 00:12:58.155 Copy (19h): Supported LBA-Change 00:12:58.155 00:12:58.155 Error Log 00:12:58.155 ========= 00:12:58.155 00:12:58.155 Arbitration 00:12:58.155 =========== 00:12:58.155 Arbitration Burst: 1 00:12:58.155 00:12:58.155 Power Management 00:12:58.155 ================ 00:12:58.155 Number of Power States: 1 00:12:58.155 Current Power State: Power State #0 00:12:58.155 Power State #0: 00:12:58.155 Max Power: 0.00 W 00:12:58.155 Non-Operational State: Operational 00:12:58.155 Entry Latency: Not Reported 00:12:58.155 Exit Latency: Not Reported 00:12:58.155 Relative Read Throughput: 0 00:12:58.155 Relative Read Latency: 0 00:12:58.155 Relative Write Throughput: 0 00:12:58.155 Relative Write Latency: 0 00:12:58.155 Idle Power: Not Reported 00:12:58.155 Active Power: Not Reported 00:12:58.155 Non-Operational Permissive Mode: Not Supported 00:12:58.155 00:12:58.155 Health Information 00:12:58.155 ================== 00:12:58.155 Critical Warnings: 00:12:58.155 Available Spare Space: OK 00:12:58.155 Temperature: OK 00:12:58.155 Device Reliability: OK 00:12:58.155 Read Only: No 00:12:58.155 Volatile Memory Backup: OK 00:12:58.155 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:58.155 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:58.156 Available Spare: 0% 00:12:58.156 Available Sp[2024-07-16 00:24:11.706365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:58.156 [2024-07-16 00:24:11.714236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:58.156 [2024-07-16 00:24:11.714274] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:58.156 [2024-07-16 00:24:11.714284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.156 [2024-07-16 00:24:11.714290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.156 [2024-07-16 00:24:11.714297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.156 [2024-07-16 00:24:11.714303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.156 [2024-07-16 00:24:11.714361] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:58.156 [2024-07-16 00:24:11.714371] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:58.156 [2024-07-16 00:24:11.715368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.156 [2024-07-16 00:24:11.715416] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:58.156 [2024-07-16 00:24:11.715422] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:58.156 [2024-07-16 00:24:11.716372] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:58.156 [2024-07-16 00:24:11.716384] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:58.156 [2024-07-16 00:24:11.716432] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:58.156 [2024-07-16 00:24:11.717904] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:58.156 are Threshold: 0% 00:12:58.156 Life Percentage Used: 0% 00:12:58.156 Data Units Read: 0 00:12:58.156 Data Units Written: 0 00:12:58.156 Host Read Commands: 0 00:12:58.156 Host Write Commands: 0 00:12:58.156 Controller Busy Time: 0 minutes 00:12:58.156 Power Cycles: 0 00:12:58.156 Power On Hours: 0 hours 00:12:58.156 Unsafe Shutdowns: 0 00:12:58.156 Unrecoverable Media Errors: 0 00:12:58.156 Lifetime Error Log Entries: 0 00:12:58.156 Warning Temperature Time: 0 minutes 00:12:58.156 Critical Temperature Time: 0 minutes 00:12:58.156 00:12:58.156 Number of Queues 00:12:58.156 ================ 00:12:58.156 Number of I/O Submission Queues: 127 00:12:58.156 Number of I/O Completion Queues: 127 00:12:58.156 00:12:58.156 Active Namespaces 00:12:58.156 ================= 00:12:58.156 Namespace ID:1 00:12:58.156 Error Recovery Timeout: Unlimited 00:12:58.156 Command Set Identifier: NVM (00h) 00:12:58.156 Deallocate: Supported 00:12:58.156 Deallocated/Unwritten Error: Not Supported 00:12:58.156 Deallocated Read Value: Unknown 00:12:58.156 Deallocate in Write Zeroes: Not Supported 00:12:58.156 Deallocated Guard Field: 0xFFFF 00:12:58.156 Flush: Supported 00:12:58.156 Reservation: Supported 00:12:58.156 Namespace Sharing Capabilities: Multiple Controllers 00:12:58.156 Size (in LBAs): 131072 (0GiB) 00:12:58.156 Capacity (in LBAs): 131072 (0GiB) 00:12:58.156 Utilization (in LBAs): 131072 (0GiB) 00:12:58.156 NGUID: E4381BE4FF9849008BD41592D5181F13 00:12:58.156 UUID: e4381be4-ff98-4900-8bd4-1592d5181f13 00:12:58.156 Thin Provisioning: Not Supported 00:12:58.156 Per-NS Atomic Units: Yes 00:12:58.156 Atomic Boundary Size (Normal): 0 00:12:58.156 Atomic Boundary Size 
(PFail): 0 00:12:58.156 Atomic Boundary Offset: 0 00:12:58.156 Maximum Single Source Range Length: 65535 00:12:58.156 Maximum Copy Length: 65535 00:12:58.156 Maximum Source Range Count: 1 00:12:58.156 NGUID/EUI64 Never Reused: No 00:12:58.156 Namespace Write Protected: No 00:12:58.156 Number of LBA Formats: 1 00:12:58.156 Current LBA Format: LBA Format #00 00:12:58.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:58.156 00:12:58.156 00:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:58.417 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.417 [2024-07-16 00:24:11.913610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.699 Initializing NVMe Controllers 00:13:03.699 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.699 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:03.699 Initialization complete. Launching workers. 00:13:03.699 ======================================================== 00:13:03.699 Latency(us) 00:13:03.699 Device Information : IOPS MiB/s Average min max 00:13:03.699 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39964.72 156.11 3202.70 843.41 7786.71 00:13:03.699 ======================================================== 00:13:03.699 Total : 39964.72 156.11 3202.70 843.41 7786.71 00:13:03.699 00:13:03.699 [2024-07-16 00:24:17.018430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.699 00:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:03.699 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.699 [2024-07-16 00:24:17.197965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.975 Initializing NVMe Controllers 00:13:08.975 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.975 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:08.975 Initialization complete. Launching workers. 
00:13:08.976 ======================================================== 00:13:08.976 Latency(us) 00:13:08.976 Device Information : IOPS MiB/s Average min max 00:13:08.976 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35154.71 137.32 3640.69 1108.37 7459.14 00:13:08.976 ======================================================== 00:13:08.976 Total : 35154.71 137.32 3640.69 1108.37 7459.14 00:13:08.976 00:13:08.976 [2024-07-16 00:24:22.220223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.976 00:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:08.976 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.976 [2024-07-16 00:24:22.419376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.289 [2024-07-16 00:24:27.553315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.289 Initializing NVMe Controllers 00:13:14.289 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.289 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.289 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:14.289 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:14.289 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:14.289 Initialization complete. Launching workers. 00:13:14.289 Starting thread on core 2 00:13:14.289 Starting thread on core 3 00:13:14.289 Starting thread on core 1 00:13:14.289 00:24:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:14.289 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.289 [2024-07-16 00:24:27.822691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.589 [2024-07-16 00:24:30.906383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.589 Initializing NVMe Controllers 00:13:17.589 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.589 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.589 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:17.589 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:17.589 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:17.589 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:17.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:17.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:17.589 Initialization complete. Launching workers. 
00:13:17.589 Starting thread on core 1 with urgent priority queue 00:13:17.589 Starting thread on core 2 with urgent priority queue 00:13:17.589 Starting thread on core 3 with urgent priority queue 00:13:17.589 Starting thread on core 0 with urgent priority queue 00:13:17.589 SPDK bdev Controller (SPDK2 ) core 0: 9865.33 IO/s 10.14 secs/100000 ios 00:13:17.589 SPDK bdev Controller (SPDK2 ) core 1: 7093.67 IO/s 14.10 secs/100000 ios 00:13:17.589 SPDK bdev Controller (SPDK2 ) core 2: 7084.33 IO/s 14.12 secs/100000 ios 00:13:17.589 SPDK bdev Controller (SPDK2 ) core 3: 8401.67 IO/s 11.90 secs/100000 ios 00:13:17.589 ======================================================== 00:13:17.589 00:13:17.589 00:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:17.589 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.589 [2024-07-16 00:24:31.174344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.589 Initializing NVMe Controllers 00:13:17.589 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.589 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.589 Namespace ID: 1 size: 0GB 00:13:17.589 Initialization complete. 00:13:17.589 INFO: using host memory buffer for IO 00:13:17.589 Hello world! 00:13:17.589 [2024-07-16 00:24:31.185422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.851 00:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:17.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.851 [2024-07-16 00:24:31.455158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.333 Initializing NVMe Controllers 00:13:19.333 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.333 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.333 Initialization complete. Launching workers. 
00:13:19.333 submit (in ns) avg, min, max = 7877.6, 3935.8, 4001902.5 00:13:19.333 complete (in ns) avg, min, max = 19186.2, 2390.8, 3999765.8 00:13:19.333 00:13:19.333 Submit histogram 00:13:19.333 ================ 00:13:19.333 Range in us Cumulative Count 00:13:19.333 3.920 - 3.947: 0.1255% ( 24) 00:13:19.333 3.947 - 3.973: 1.2755% ( 220) 00:13:19.333 3.973 - 4.000: 4.9294% ( 699) 00:13:19.333 4.000 - 4.027: 12.7862% ( 1503) 00:13:19.333 4.027 - 4.053: 23.5337% ( 2056) 00:13:19.333 4.053 - 4.080: 34.9137% ( 2177) 00:13:19.333 4.080 - 4.107: 47.6163% ( 2430) 00:13:19.333 4.107 - 4.133: 65.8756% ( 3493) 00:13:19.333 4.133 - 4.160: 80.8782% ( 2870) 00:13:19.333 4.160 - 4.187: 90.9514% ( 1927) 00:13:19.333 4.187 - 4.213: 96.4506% ( 1052) 00:13:19.333 4.213 - 4.240: 98.3638% ( 366) 00:13:19.333 4.240 - 4.267: 99.0591% ( 133) 00:13:19.333 4.267 - 4.293: 99.2734% ( 41) 00:13:19.333 4.293 - 4.320: 99.3466% ( 14) 00:13:19.333 4.320 - 4.347: 99.3623% ( 3) 00:13:19.333 4.347 - 4.373: 99.3884% ( 5) 00:13:19.333 4.373 - 4.400: 99.3988% ( 2) 00:13:19.333 4.400 - 4.427: 99.4250% ( 5) 00:13:19.333 4.480 - 4.507: 99.4302% ( 1) 00:13:19.333 4.693 - 4.720: 99.4354% ( 1) 00:13:19.333 4.720 - 4.747: 99.4407% ( 1) 00:13:19.333 4.827 - 4.853: 99.4459% ( 1) 00:13:19.333 4.853 - 4.880: 99.4511% ( 1) 00:13:19.333 4.880 - 4.907: 99.4616% ( 2) 00:13:19.333 4.907 - 4.933: 99.4668% ( 1) 00:13:19.333 4.960 - 4.987: 99.4720% ( 1) 00:13:19.333 4.987 - 5.013: 99.4773% ( 1) 00:13:19.333 5.093 - 5.120: 99.4825% ( 1) 00:13:19.333 5.227 - 5.253: 99.4982% ( 3) 00:13:19.333 5.280 - 5.307: 99.5034% ( 1) 00:13:19.333 5.413 - 5.440: 99.5086% ( 1) 00:13:19.333 5.493 - 5.520: 99.5139% ( 1) 00:13:19.333 5.573 - 5.600: 99.5191% ( 1) 00:13:19.333 5.680 - 5.707: 99.5243% ( 1) 00:13:19.333 5.707 - 5.733: 99.5295% ( 1) 00:13:19.333 5.733 - 5.760: 99.5348% ( 1) 00:13:19.333 5.813 - 5.840: 99.5400% ( 1) 00:13:19.333 6.000 - 6.027: 99.5452% ( 1) 00:13:19.333 6.027 - 6.053: 99.5504% ( 1) 00:13:19.333 6.107 - 6.133: 99.5557% ( 1) 00:13:19.333 6.133 - 6.160: 99.5609% ( 1) 00:13:19.333 6.160 - 6.187: 99.5714% ( 2) 00:13:19.333 6.187 - 6.213: 99.5818% ( 2) 00:13:19.333 6.267 - 6.293: 99.5975% ( 3) 00:13:19.333 6.293 - 6.320: 99.6027% ( 1) 00:13:19.333 6.320 - 6.347: 99.6132% ( 2) 00:13:19.333 6.427 - 6.453: 99.6184% ( 1) 00:13:19.333 6.453 - 6.480: 99.6236% ( 1) 00:13:19.333 6.480 - 6.507: 99.6289% ( 1) 00:13:19.333 6.507 - 6.533: 99.6393% ( 2) 00:13:19.333 6.560 - 6.587: 99.6445% ( 1) 00:13:19.333 6.587 - 6.613: 99.6550% ( 2) 00:13:19.333 6.613 - 6.640: 99.6602% ( 1) 00:13:19.333 6.827 - 6.880: 99.6654% ( 1) 00:13:19.333 6.933 - 6.987: 99.6759% ( 2) 00:13:19.333 7.093 - 7.147: 99.6864% ( 2) 00:13:19.333 7.147 - 7.200: 99.6916% ( 1) 00:13:19.333 7.200 - 7.253: 99.6968% ( 1) 00:13:19.333 7.253 - 7.307: 99.7020% ( 1) 00:13:19.333 7.307 - 7.360: 99.7125% ( 2) 00:13:19.333 7.413 - 7.467: 99.7177% ( 1) 00:13:19.333 7.573 - 7.627: 99.7282% ( 2) 00:13:19.333 7.627 - 7.680: 99.7334% ( 1) 00:13:19.333 7.733 - 7.787: 99.7386% ( 1) 00:13:19.333 7.787 - 7.840: 99.7439% ( 1) 00:13:19.333 7.840 - 7.893: 99.7543% ( 2) 00:13:19.333 7.893 - 7.947: 99.7595% ( 1) 00:13:19.333 7.947 - 8.000: 99.7700% ( 2) 00:13:19.333 8.000 - 8.053: 99.7804% ( 2) 00:13:19.333 8.107 - 8.160: 99.7909% ( 2) 00:13:19.333 8.160 - 8.213: 99.7961% ( 1) 00:13:19.333 8.213 - 8.267: 99.8014% ( 1) 00:13:19.333 8.373 - 8.427: 99.8066% ( 1) 00:13:19.333 8.427 - 8.480: 99.8223% ( 3) 00:13:19.333 8.533 - 8.587: 99.8327% ( 2) 00:13:19.333 8.587 - 8.640: 99.8380% ( 1) 00:13:19.333 8.693 - 8.747: 
99.8484% ( 2) 00:13:19.333 8.800 - 8.853: 99.8589% ( 2) 00:13:19.333 [2024-07-16 00:24:32.548931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.333 8.853 - 8.907: 99.8693% ( 2) 00:13:19.333 8.907 - 8.960: 99.8798% ( 2) 00:13:19.333 8.960 - 9.013: 99.8850% ( 1) 00:13:19.333 9.387 - 9.440: 99.8902% ( 1) 00:13:19.333 12.907 - 12.960: 99.8955% ( 1) 00:13:19.333 13.653 - 13.760: 99.9007% ( 1) 00:13:19.333 13.867 - 13.973: 99.9059% ( 1) 00:13:19.333 3986.773 - 4014.080: 100.0000% ( 18) 00:13:19.333 00:13:19.333 Complete histogram 00:13:19.333 ================== 00:13:19.333 Range in us Cumulative Count 00:13:19.333 2.387 - 2.400: 0.3816% ( 73) 00:13:19.333 2.400 - 2.413: 0.8991% ( 99) 00:13:19.334 2.413 - 2.427: 1.0612% ( 31) 00:13:19.334 2.427 - 2.440: 1.2180% ( 30) 00:13:19.334 2.440 - 2.453: 32.2896% ( 5944) 00:13:19.334 2.453 - 2.467: 44.3492% ( 2307) 00:13:19.334 2.467 - 2.480: 65.4469% ( 4036) 00:13:19.334 2.480 - 2.493: 75.8547% ( 1991) 00:13:19.334 2.493 - 2.507: 80.2405% ( 839) 00:13:19.334 2.507 - 2.520: 83.5180% ( 627) 00:13:19.334 2.520 - 2.533: 88.3691% ( 928) 00:13:19.334 2.533 - 2.547: 92.8280% ( 853) 00:13:19.334 2.547 - 2.560: 95.9697% ( 601) 00:13:19.334 2.560 - 2.573: 98.1338% ( 414) 00:13:19.334 2.573 - 2.587: 99.0277% ( 171) 00:13:19.334 2.587 - 2.600: 99.2316% ( 39) 00:13:19.334 2.600 - 2.613: 99.2682% ( 7) 00:13:19.334 2.613 - 2.627: 99.2838% ( 3) 00:13:19.334 2.640 - 2.653: 99.2891% ( 1) 00:13:19.334 4.693 - 4.720: 99.2943% ( 1) 00:13:19.334 4.720 - 4.747: 99.2995% ( 1) 00:13:19.334 4.747 - 4.773: 99.3048% ( 1) 00:13:19.334 4.880 - 4.907: 99.3100% ( 1) 00:13:19.334 4.960 - 4.987: 99.3152% ( 1) 00:13:19.334 5.093 - 5.120: 99.3257% ( 2) 00:13:19.334 5.120 - 5.147: 99.3309% ( 1) 00:13:19.334 5.200 - 5.227: 99.3361% ( 1) 00:13:19.334 5.307 - 5.333: 99.3413% ( 1) 00:13:19.334 5.333 - 5.360: 99.3518% ( 2) 00:13:19.334 5.600 - 5.627: 99.3570% ( 1) 00:13:19.334 5.813 - 5.840: 99.3675% ( 2) 00:13:19.334 5.840 - 5.867: 99.3727% ( 1) 00:13:19.334 5.867 - 5.893: 99.3779% ( 1) 00:13:19.334 5.920 - 5.947: 99.3832% ( 1) 00:13:19.334 5.947 - 5.973: 99.3936% ( 2) 00:13:19.334 5.973 - 6.000: 99.4041% ( 2) 00:13:19.334 6.000 - 6.027: 99.4093% ( 1) 00:13:19.334 6.080 - 6.107: 99.4145% ( 1) 00:13:19.334 6.107 - 6.133: 99.4198% ( 1) 00:13:19.334 6.133 - 6.160: 99.4250% ( 1) 00:13:19.334 6.160 - 6.187: 99.4302% ( 1) 00:13:19.334 6.187 - 6.213: 99.4511% ( 4) 00:13:19.334 6.213 - 6.240: 99.4616% ( 2) 00:13:19.334 6.267 - 6.293: 99.4668% ( 1) 00:13:19.334 6.320 - 6.347: 99.4720% ( 1) 00:13:19.334 6.347 - 6.373: 99.4773% ( 1) 00:13:19.334 6.427 - 6.453: 99.4825% ( 1) 00:13:19.334 6.480 - 6.507: 99.4877% ( 1) 00:13:19.334 6.613 - 6.640: 99.4929% ( 1) 00:13:19.334 6.693 - 6.720: 99.4982% ( 1) 00:13:19.334 6.880 - 6.933: 99.5034% ( 1) 00:13:19.334 6.933 - 6.987: 99.5086% ( 1) 00:13:19.334 6.987 - 7.040: 99.5243% ( 3) 00:13:19.334 7.200 - 7.253: 99.5295% ( 1) 00:13:19.334 7.253 - 7.307: 99.5348% ( 1) 00:13:19.334 7.467 - 7.520: 99.5452% ( 2) 00:13:19.334 7.680 - 7.733: 99.5504% ( 1) 00:13:19.334 8.000 - 8.053: 99.5557% ( 1) 00:13:19.334 8.587 - 8.640: 99.5609% ( 1) 00:13:19.334 9.813 - 9.867: 99.5661% ( 1) 00:13:19.334 11.947 - 12.000: 99.5714% ( 1) 00:13:19.334 12.693 - 12.747: 99.5766% ( 1) 00:13:19.334 13.493 - 13.547: 99.5818% ( 1) 00:13:19.334 3768.320 - 3795.627: 99.5870% ( 1) 00:13:19.334 3986.773 - 4014.080: 100.0000% ( 79) 00:13:19.334 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:19.334 [ 00:13:19.334 { 00:13:19.334 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:19.334 "subtype": "Discovery", 00:13:19.334 "listen_addresses": [], 00:13:19.334 "allow_any_host": true, 00:13:19.334 "hosts": [] 00:13:19.334 }, 00:13:19.334 { 00:13:19.334 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:19.334 "subtype": "NVMe", 00:13:19.334 "listen_addresses": [ 00:13:19.334 { 00:13:19.334 "trtype": "VFIOUSER", 00:13:19.334 "adrfam": "IPv4", 00:13:19.334 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:19.334 "trsvcid": "0" 00:13:19.334 } 00:13:19.334 ], 00:13:19.334 "allow_any_host": true, 00:13:19.334 "hosts": [], 00:13:19.334 "serial_number": "SPDK1", 00:13:19.334 "model_number": "SPDK bdev Controller", 00:13:19.334 "max_namespaces": 32, 00:13:19.334 "min_cntlid": 1, 00:13:19.334 "max_cntlid": 65519, 00:13:19.334 "namespaces": [ 00:13:19.334 { 00:13:19.334 "nsid": 1, 00:13:19.334 "bdev_name": "Malloc1", 00:13:19.334 "name": "Malloc1", 00:13:19.334 "nguid": "6A60092579594D6592743B010670D04B", 00:13:19.334 "uuid": "6a600925-7959-4d65-9274-3b010670d04b" 00:13:19.334 }, 00:13:19.334 { 00:13:19.334 "nsid": 2, 00:13:19.334 "bdev_name": "Malloc3", 00:13:19.334 "name": "Malloc3", 00:13:19.334 "nguid": "9843DFA8CA6E49578F3A1DF2AF82C6BB", 00:13:19.334 "uuid": "9843dfa8-ca6e-4957-8f3a-1df2af82c6bb" 00:13:19.334 } 00:13:19.334 ] 00:13:19.334 }, 00:13:19.334 { 00:13:19.334 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:19.334 "subtype": "NVMe", 00:13:19.334 "listen_addresses": [ 00:13:19.334 { 00:13:19.334 "trtype": "VFIOUSER", 00:13:19.334 "adrfam": "IPv4", 00:13:19.334 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:19.334 "trsvcid": "0" 00:13:19.334 } 00:13:19.334 ], 00:13:19.334 "allow_any_host": true, 00:13:19.334 "hosts": [], 00:13:19.334 "serial_number": "SPDK2", 00:13:19.334 "model_number": "SPDK bdev Controller", 00:13:19.334 "max_namespaces": 32, 00:13:19.334 "min_cntlid": 1, 00:13:19.334 "max_cntlid": 65519, 00:13:19.334 "namespaces": [ 00:13:19.334 { 00:13:19.334 "nsid": 1, 00:13:19.334 "bdev_name": "Malloc2", 00:13:19.334 "name": "Malloc2", 00:13:19.334 "nguid": "E4381BE4FF9849008BD41592D5181F13", 00:13:19.334 "uuid": "e4381be4-ff98-4900-8bd4-1592d5181f13" 00:13:19.334 } 00:13:19.334 ] 00:13:19.334 } 00:13:19.334 ] 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1000463 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:19.334 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.334 Malloc4 00:13:19.334 00:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:19.334 [2024-07-16 00:24:32.950701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.594 [2024-07-16 00:24:33.082566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.594 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:19.594 Asynchronous Event Request test 00:13:19.594 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.594 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.594 Registering asynchronous event callbacks... 00:13:19.594 Starting namespace attribute notice tests for all controllers... 00:13:19.594 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:19.594 aer_cb - Changed Namespace 00:13:19.594 Cleaning up... 
00:13:19.854 [ 00:13:19.854 { 00:13:19.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:19.854 "subtype": "Discovery", 00:13:19.854 "listen_addresses": [], 00:13:19.854 "allow_any_host": true, 00:13:19.854 "hosts": [] 00:13:19.854 }, 00:13:19.854 { 00:13:19.854 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:19.854 "subtype": "NVMe", 00:13:19.854 "listen_addresses": [ 00:13:19.854 { 00:13:19.854 "trtype": "VFIOUSER", 00:13:19.854 "adrfam": "IPv4", 00:13:19.854 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:19.854 "trsvcid": "0" 00:13:19.854 } 00:13:19.854 ], 00:13:19.854 "allow_any_host": true, 00:13:19.854 "hosts": [], 00:13:19.854 "serial_number": "SPDK1", 00:13:19.854 "model_number": "SPDK bdev Controller", 00:13:19.854 "max_namespaces": 32, 00:13:19.854 "min_cntlid": 1, 00:13:19.854 "max_cntlid": 65519, 00:13:19.854 "namespaces": [ 00:13:19.854 { 00:13:19.854 "nsid": 1, 00:13:19.854 "bdev_name": "Malloc1", 00:13:19.854 "name": "Malloc1", 00:13:19.854 "nguid": "6A60092579594D6592743B010670D04B", 00:13:19.854 "uuid": "6a600925-7959-4d65-9274-3b010670d04b" 00:13:19.854 }, 00:13:19.854 { 00:13:19.854 "nsid": 2, 00:13:19.854 "bdev_name": "Malloc3", 00:13:19.854 "name": "Malloc3", 00:13:19.854 "nguid": "9843DFA8CA6E49578F3A1DF2AF82C6BB", 00:13:19.854 "uuid": "9843dfa8-ca6e-4957-8f3a-1df2af82c6bb" 00:13:19.854 } 00:13:19.854 ] 00:13:19.854 }, 00:13:19.854 { 00:13:19.854 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:19.854 "subtype": "NVMe", 00:13:19.854 "listen_addresses": [ 00:13:19.854 { 00:13:19.854 "trtype": "VFIOUSER", 00:13:19.854 "adrfam": "IPv4", 00:13:19.854 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:19.854 "trsvcid": "0" 00:13:19.854 } 00:13:19.854 ], 00:13:19.854 "allow_any_host": true, 00:13:19.854 "hosts": [], 00:13:19.854 "serial_number": "SPDK2", 00:13:19.854 "model_number": "SPDK bdev Controller", 00:13:19.854 "max_namespaces": 32, 00:13:19.854 "min_cntlid": 1, 00:13:19.854 "max_cntlid": 65519, 00:13:19.854 "namespaces": [ 00:13:19.854 { 00:13:19.854 "nsid": 1, 00:13:19.854 "bdev_name": "Malloc2", 00:13:19.854 "name": "Malloc2", 00:13:19.854 "nguid": "E4381BE4FF9849008BD41592D5181F13", 00:13:19.854 "uuid": "e4381be4-ff98-4900-8bd4-1592d5181f13" 00:13:19.854 }, 00:13:19.854 { 00:13:19.854 "nsid": 2, 00:13:19.854 "bdev_name": "Malloc4", 00:13:19.854 "name": "Malloc4", 00:13:19.854 "nguid": "70BCC46F5C6141D69DE5C6CDE8DCE52F", 00:13:19.854 "uuid": "70bcc46f-5c61-41d6-9de5-c6cde8dce52f" 00:13:19.854 } 00:13:19.854 ] 00:13:19.854 } 00:13:19.854 ] 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1000463 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 990885 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 990885 ']' 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 990885 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990885 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990885' 00:13:19.854 killing process with pid 990885 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 990885 00:13:19.854 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 990885 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1000797 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1000797' 00:13:20.115 Process pid: 1000797 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1000797 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1000797 ']' 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.115 00:24:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:20.115 [2024-07-16 00:24:33.568763] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:20.115 [2024-07-16 00:24:33.569677] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:20.115 [2024-07-16 00:24:33.569719] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.115 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.115 [2024-07-16 00:24:33.637048] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.115 [2024-07-16 00:24:33.701049] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.115 [2024-07-16 00:24:33.701090] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:20.115 [2024-07-16 00:24:33.701097] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.115 [2024-07-16 00:24:33.701104] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.115 [2024-07-16 00:24:33.701110] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.115 [2024-07-16 00:24:33.701273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.115 [2024-07-16 00:24:33.701511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.115 [2024-07-16 00:24:33.701512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.115 [2024-07-16 00:24:33.701344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.375 [2024-07-16 00:24:33.766500] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:20.375 [2024-07-16 00:24:33.766553] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:20.375 [2024-07-16 00:24:33.767601] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:20.375 [2024-07-16 00:24:33.768038] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:20.375 [2024-07-16 00:24:33.768134] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:20.945 00:24:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.945 00:24:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:20.945 00:24:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.886 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:21.886 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.886 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.886 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.886 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:22.146 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:22.146 Malloc1 00:13:22.146 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:22.407 00:24:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.407 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.666 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:22.666 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.666 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:22.925 Malloc2 00:13:22.925 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:22.925 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:23.185 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1000797 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1000797 ']' 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1000797 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:23.444 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1000797 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1000797' 00:13:23.445 killing process with pid 1000797 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1000797 00:13:23.445 00:24:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1000797 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:23.705 00:13:23.705 real 0m50.604s 00:13:23.705 user 3m20.520s 00:13:23.705 sys 0m3.014s 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.705 ************************************ 00:13:23.705 END TEST nvmf_vfio_user 00:13:23.705 ************************************ 00:13:23.705 00:24:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:23.705 00:24:37 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:23.705 00:24:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:23.705 00:24:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.705 00:24:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.705 ************************************ 00:13:23.705 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:23.705 ************************************ 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:23.705 * Looking for test storage... 00:13:23.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1001546 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1001546' 00:13:23.705 Process pid: 1001546 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1001546 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1001546 ']' 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.705 00:24:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:23.965 [2024-07-16 00:24:37.361861] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:23.965 [2024-07-16 00:24:37.361922] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.965 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.965 [2024-07-16 00:24:37.433885] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.965 [2024-07-16 00:24:37.508220] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.965 [2024-07-16 00:24:37.508263] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.965 [2024-07-16 00:24:37.508271] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.965 [2024-07-16 00:24:37.508277] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.965 [2024-07-16 00:24:37.508283] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
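For reference, the nvmf_vfio_user run that ends above provisions each of its two vfio-user controllers with the same handful of rpc.py calls. A condensed sketch of the sequence for the first device, with the commands, NQNs and socket paths copied from the log; the $rpc shorthand is only an editorial convenience, the script itself spells out the full scripts/rpc.py path on every call:

# Per-device provisioning as performed by the nvmf_vfio_user run above (device 1 shown).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER -M -I        # flags copied verbatim from the run above
mkdir -p /var/run/vfio-user/domain/vfio-user1/1     # socket directory for the first controller
$rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# The second controller repeats the same steps with Malloc2, cnode2 and .../vfio-user2/2.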
00:13:23.965 [2024-07-16 00:24:37.508426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.965 [2024-07-16 00:24:37.508543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.965 [2024-07-16 00:24:37.508546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.536 00:24:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.536 00:24:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:24.536 00:24:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.918 malloc0 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.918 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.919 00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.919 
00:24:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:25.919 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.919 00:13:25.919 00:13:25.919 CUnit - A unit testing framework for C - Version 2.1-3 00:13:25.919 http://cunit.sourceforge.net/ 00:13:25.919 00:13:25.919 00:13:25.919 Suite: nvme_compliance 00:13:25.919 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 00:24:39.395706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.919 [2024-07-16 00:24:39.397041] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:25.919 [2024-07-16 00:24:39.397051] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:25.919 [2024-07-16 00:24:39.397055] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:25.919 [2024-07-16 00:24:39.398728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.919 passed 00:13:25.919 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 00:24:39.494330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.919 [2024-07-16 00:24:39.497352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.919 passed 00:13:26.179 Test: admin_identify_ns ...[2024-07-16 00:24:39.592479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.179 [2024-07-16 00:24:39.652244] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:26.179 [2024-07-16 00:24:39.660251] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:26.179 [2024-07-16 00:24:39.681348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.179 passed 00:13:26.179 Test: admin_get_features_mandatory_features ...[2024-07-16 00:24:39.776430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.179 [2024-07-16 00:24:39.779442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.440 passed 00:13:26.440 Test: admin_get_features_optional_features ...[2024-07-16 00:24:39.873979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.440 [2024-07-16 00:24:39.876995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.440 passed 00:13:26.440 Test: admin_set_features_number_of_queues ...[2024-07-16 00:24:39.970485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.700 [2024-07-16 00:24:40.074362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.700 passed 00:13:26.700 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 00:24:40.168364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.700 [2024-07-16 00:24:40.171391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.700 passed 00:13:26.700 Test: admin_get_log_page_with_lpo ...[2024-07-16 00:24:40.264484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.961 [2024-07-16 00:24:40.332245] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:26.961 [2024-07-16 00:24:40.345301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.961 passed 00:13:26.961 Test: fabric_property_get ...[2024-07-16 00:24:40.439349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.961 [2024-07-16 00:24:40.440596] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:26.961 [2024-07-16 00:24:40.442369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.961 passed 00:13:26.961 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 00:24:40.535907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.961 [2024-07-16 00:24:40.537162] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:26.961 [2024-07-16 00:24:40.539932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.961 passed 00:13:27.222 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 00:24:40.632091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.222 [2024-07-16 00:24:40.716238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:27.222 [2024-07-16 00:24:40.732237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:27.222 [2024-07-16 00:24:40.737328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.222 passed 00:13:27.222 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 00:24:40.828908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.222 [2024-07-16 00:24:40.830152] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:27.222 [2024-07-16 00:24:40.831926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.483 passed 00:13:27.483 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 00:24:40.925023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.483 [2024-07-16 00:24:41.002244] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:27.483 [2024-07-16 00:24:41.026242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:27.483 [2024-07-16 00:24:41.031333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.483 passed 00:13:27.743 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 00:24:41.122086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.743 [2024-07-16 00:24:41.123329] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:27.743 [2024-07-16 00:24:41.123348] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:27.743 [2024-07-16 00:24:41.127112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.743 passed 00:13:27.743 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 00:24:41.219211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.743 [2024-07-16 00:24:41.312238] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:27.743 [2024-07-16 00:24:41.320236] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:27.743 [2024-07-16 00:24:41.328237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:27.743 [2024-07-16 00:24:41.336239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:27.743 [2024-07-16 00:24:41.365322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.002 passed 00:13:28.002 Test: admin_create_io_sq_verify_pc ...[2024-07-16 00:24:41.457950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.002 [2024-07-16 00:24:41.474245] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:28.002 [2024-07-16 00:24:41.492110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.002 passed 00:13:28.003 Test: admin_create_io_qp_max_qps ...[2024-07-16 00:24:41.585674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.383 [2024-07-16 00:24:42.684241] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:29.643 [2024-07-16 00:24:43.068945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.643 passed 00:13:29.643 Test: admin_create_io_sq_shared_cq ...[2024-07-16 00:24:43.162663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.902 [2024-07-16 00:24:43.294240] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:29.902 [2024-07-16 00:24:43.331292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.902 passed 00:13:29.902 00:13:29.902 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.902 suites 1 1 n/a 0 0 00:13:29.902 tests 18 18 18 0 0 00:13:29.902 asserts 360 360 360 0 n/a 00:13:29.902 00:13:29.902 Elapsed time = 1.648 seconds 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1001546 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1001546 ']' 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1001546 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1001546 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1001546' 00:13:29.902 killing process with pid 1001546 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1001546 00:13:29.902 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1001546 00:13:30.161 00:24:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:30.161 00:13:30.161 real 0m6.413s 00:13:30.161 user 0m18.313s 00:13:30.161 sys 0m0.469s 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.161 ************************************ 00:13:30.161 END TEST nvmf_vfio_user_nvme_compliance 00:13:30.161 ************************************ 00:13:30.161 00:24:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.161 00:24:43 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:30.161 00:24:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.161 00:24:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.161 00:24:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.161 ************************************ 00:13:30.161 START TEST nvmf_vfio_user_fuzz 00:13:30.161 ************************************ 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:30.161 * Looking for test storage... 00:13:30.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
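The compliance suite that just reported 18/18 CUnit tests passing needs only a small amount of target-side state, so the run above is straightforward to reproduce by hand. The following is a sketch under the assumption that the tree lives at the same path as in the log, with a plain sleep standing in for the script's waitforlisten helper; in the test itself rpc_cmd is an autotest_common.sh wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # tree location taken from the log
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &            # same core mask as compliance.sh uses
sleep 1                                                     # crude stand-in for waitforlisten
mkdir -p /var/run/vfio-user
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# Initiator side: the compliance binary connects over the vfio-user socket directory.
$SPDK/test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'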
00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.161 00:24:43 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.161 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1002942 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1002942' 00:13:30.421 Process pid: 1002942 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1002942 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1002942 ']' 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
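Every target in this part of the log is managed with the same lifecycle: start nvmf_tgt with a core mask, poll until the RPC socket at /var/tmp/spdk.sock answers, run the workload, then kill and reap the pid. Below is a minimal sketch of that pattern, assuming the same binary path as above and using rpc_get_methods as the readiness probe; the real waitforlisten and killprocess helpers in autotest_common.sh add retry limits and a reactor_0 process-name check that are omitted here:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # vfio_user_fuzz starts its target on core mask 0x1
nvmfpid=$!
echo "Process pid: $nvmfpid"
# Poll the default RPC socket until the target is ready to accept commands.
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1                     # give up if the target died during startup
    sleep 0.5
done
# ... run the workload against the target here ...
kill "$nvmfpid"
wait "$nvmfpid"                                      # reap the pid, mirroring killprocess/wait in the log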
00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.421 00:24:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.360 00:24:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.360 00:24:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:31.360 00:24:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 malloc0 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:32.302 00:24:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:04.397 Fuzzing completed. 
Shutting down the fuzz application 00:14:04.397 00:14:04.397 Dumping successful admin opcodes: 00:14:04.397 8, 9, 10, 24, 00:14:04.397 Dumping successful io opcodes: 00:14:04.397 0, 00:14:04.397 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1126957, total successful commands: 4437, random_seed: 463469440 00:14:04.397 NS: 0x200003a1ef00 admin qp, Total commands completed: 141800, total successful commands: 1150, random_seed: 3128247488 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1002942 ']' 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1002942' 00:14:04.397 killing process with pid 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1002942 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:04.397 00:14:04.397 real 0m33.690s 00:14:04.397 user 0m38.027s 00:14:04.397 sys 0m25.946s 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.397 00:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 ************************************ 00:14:04.397 END TEST nvmf_vfio_user_fuzz 00:14:04.397 ************************************ 00:14:04.397 00:25:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:04.397 00:25:17 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.397 00:25:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.397 00:25:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.397 00:25:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 ************************************ 
00:14:04.397 START TEST nvmf_host_management 00:14:04.398 ************************************ 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.398 * Looking for test storage... 00:14:04.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.398 
00:25:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.398 00:25:17 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.398 00:25:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:12.533 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:12.533 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:12.533 Found net devices under 0000:31:00.0: cvl_0_0 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:12.533 Found net devices under 0000:31:00.1: cvl_0_1 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:14:12.533 00:14:12.533 --- 10.0.0.2 ping statistics --- 00:14:12.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.533 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:14:12.533 00:14:12.533 --- 10.0.0.1 ping statistics --- 00:14:12.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.533 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1013617 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1013617 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1013617 ']' 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.533 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:12.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.534 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.534 00:25:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.534 [2024-07-16 00:25:25.819573] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:12.534 [2024-07-16 00:25:25.819620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.534 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.534 [2024-07-16 00:25:25.886562] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.534 [2024-07-16 00:25:25.943705] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.534 [2024-07-16 00:25:25.943740] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.534 [2024-07-16 00:25:25.943746] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.534 [2024-07-16 00:25:25.943750] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.534 [2024-07-16 00:25:25.943755] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.534 [2024-07-16 00:25:25.943860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.534 [2024-07-16 00:25:25.944020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.534 [2024-07-16 00:25:25.944171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.534 [2024-07-16 00:25:25.944174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.105 [2024-07-16 00:25:26.664997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.105 00:25:26 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.105 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.105 Malloc0 00:14:13.105 [2024-07-16 00:25:26.728464] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1013888 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1013888 /var/tmp/bdevperf.sock 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1013888 ']' 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
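Before the bdevperf initiator comes up, it helps to see what the nvmf_tcp_init plumbing traced above actually built: the target-side port (cvl_0_0) is moved into its own network namespace and nvmf_tgt is started inside it, while the initiator-side port (cvl_0_1) stays in the root namespace. A minimal sketch distilled from this trace follows; the interface names, the 10.0.0.0/24 addressing, and the relative nvmf_tgt path are specific to this rig and run.

  # Isolate the target port in a namespace and address both ends of the link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept inbound NVMe/TCP traffic (port 4420) on cvl_0_1, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself then runs inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E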
00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.365 { 00:14:13.365 "params": { 00:14:13.365 "name": "Nvme$subsystem", 00:14:13.365 "trtype": "$TEST_TRANSPORT", 00:14:13.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.365 "adrfam": "ipv4", 00:14:13.365 "trsvcid": "$NVMF_PORT", 00:14:13.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.365 "hdgst": ${hdgst:-false}, 00:14:13.365 "ddgst": ${ddgst:-false} 00:14:13.365 }, 00:14:13.365 "method": "bdev_nvme_attach_controller" 00:14:13.365 } 00:14:13.365 EOF 00:14:13.365 )") 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:13.365 00:25:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.365 "params": { 00:14:13.365 "name": "Nvme0", 00:14:13.365 "trtype": "tcp", 00:14:13.365 "traddr": "10.0.0.2", 00:14:13.365 "adrfam": "ipv4", 00:14:13.365 "trsvcid": "4420", 00:14:13.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.365 "hdgst": false, 00:14:13.365 "ddgst": false 00:14:13.365 }, 00:14:13.365 "method": "bdev_nvme_attach_controller" 00:14:13.365 }' 00:14:13.365 [2024-07-16 00:25:26.839225] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:13.365 [2024-07-16 00:25:26.839311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013888 ] 00:14:13.365 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.365 [2024-07-16 00:25:26.908138] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.365 [2024-07-16 00:25:26.973057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.626 Running I/O for 10 seconds... 
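The trace above launches bdevperf against the namespaced target; the /dev/fd/63 argument is a bash process substitution feeding it the JSON rendered by gen_nvmf_target_json from the bdev_nvme_attach_controller parameters printed just before it. The waitforio loop traced next then polls bdevperf's private RPC socket until the Nvme0n1 bdev reports a minimum number of reads. A hedged sketch of that polling pattern is shown here; rpc_cmd in the trace wraps scripts/rpc.py, and the retry sleep is an assumption since this run cleared the threshold on its first check with 515 reads.

  # Poll the bdevperf RPC socket until Nvme0n1 has completed at least 100 reads.
  for ((i = 10; i != 0; i--)); do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 1   # assumed retry cadence; not exercised in this run
  done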
00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.198 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.198 [2024-07-16 00:25:27.694995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.198 [2024-07-16 00:25:27.695147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.198 [2024-07-16 00:25:27.695154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:14.199 [2024-07-16 00:25:27.695408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 
00:25:27.695576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.199 [2024-07-16 00:25:27.695871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.199 [2024-07-16 00:25:27.695879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.695992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.695999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.200 [2024-07-16 00:25:27.696113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.696122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f01560 is same with the state(5) to be set 00:14:14.200 [2024-07-16 00:25:27.696161] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f01560 was disconnected and freed. reset controller. 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.200 [2024-07-16 00:25:27.697381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.200 task offset: 81792 on job bdev=Nvme0n1 fails 00:14:14.200 00:14:14.200 Latency(us) 00:14:14.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.200 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.200 Job: Nvme0n1 ended in about 0.45 seconds with error 00:14:14.200 Verification LBA range: start 0x0 length 0x400 00:14:14.200 Nvme0n1 : 0.45 1271.15 79.45 141.24 0.00 44051.69 5188.27 36263.25 00:14:14.200 =================================================================================================================== 00:14:14.200 Total : 1271.15 79.45 141.24 0.00 44051.69 5188.27 36263.25 00:14:14.200 [2024-07-16 00:25:27.699390] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:14.200 [2024-07-16 00:25:27.699414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af06e0 (9): Bad file descriptor 00:14:14.200 [2024-07-16 00:25:27.701251] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:14.200 [2024-07-16 00:25:27.701348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:14.200 [2024-07-16 00:25:27.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.200 [2024-07-16 00:25:27.701384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:14.200 [2024-07-16 00:25:27.701392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: 
Connect command completed with error: sct 1, sc 132 00:14:14.200 [2024-07-16 00:25:27.701399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:14.200 [2024-07-16 00:25:27.701406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1af06e0 00:14:14.200 [2024-07-16 00:25:27.701424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af06e0 (9): Bad file descriptor 00:14:14.200 [2024-07-16 00:25:27.701437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:14.200 [2024-07-16 00:25:27.701443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:14.200 [2024-07-16 00:25:27.701451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:14.200 [2024-07-16 00:25:27.701463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.200 00:25:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1013888 00:14:15.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1013888) - No such process 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:15.140 { 00:14:15.140 "params": { 00:14:15.140 "name": "Nvme$subsystem", 00:14:15.140 "trtype": "$TEST_TRANSPORT", 00:14:15.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.140 "adrfam": "ipv4", 00:14:15.140 "trsvcid": "$NVMF_PORT", 00:14:15.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.140 "hdgst": ${hdgst:-false}, 00:14:15.140 "ddgst": ${ddgst:-false} 00:14:15.140 }, 00:14:15.140 "method": "bdev_nvme_attach_controller" 00:14:15.140 } 00:14:15.140 EOF 00:14:15.140 )") 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
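What the trace above records is the host-management fault injection at the core of this test: with verify I/O in flight, the initiator's host NQN is removed from the subsystem's allowed list, the target tears down the qpair ("qpair ... was disconnected and freed. reset controller"), and the reconnect is rejected with "does not allow host", so the first bdevperf exits in error after roughly 0.45 s. Access is then restored and a short second bdevperf pass confirms the target still serves I/O (the successful 1-second run that follows). A condensed sketch of that flow as traced; rpc.py stands in for the rpc_cmd wrapper and <(...) for the /dev/fd/62 process substitution.

  # Revoke the host while I/O is running; in-flight commands abort and the
  # reconnect fails with 'Subsystem ... does not allow host ...'.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access, reap the failed initiator, then run a short verify pass.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1
  kill -9 "$perfpid" || true
  build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1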
00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:15.140 00:25:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:15.140 "params": { 00:14:15.140 "name": "Nvme0", 00:14:15.140 "trtype": "tcp", 00:14:15.140 "traddr": "10.0.0.2", 00:14:15.140 "adrfam": "ipv4", 00:14:15.140 "trsvcid": "4420", 00:14:15.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:15.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:15.140 "hdgst": false, 00:14:15.140 "ddgst": false 00:14:15.140 }, 00:14:15.140 "method": "bdev_nvme_attach_controller" 00:14:15.140 }' 00:14:15.140 [2024-07-16 00:25:28.766991] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:15.140 [2024-07-16 00:25:28.767045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014335 ] 00:14:15.400 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.400 [2024-07-16 00:25:28.832765] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.400 [2024-07-16 00:25:28.895978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.660 Running I/O for 1 seconds... 00:14:16.601 00:14:16.601 Latency(us) 00:14:16.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.601 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:16.601 Verification LBA range: start 0x0 length 0x400 00:14:16.601 Nvme0n1 : 1.02 1752.79 109.55 0.00 0.00 35850.50 5870.93 32768.00 00:14:16.601 =================================================================================================================== 00:14:16.601 Total : 1752.79 109.55 0.00 0.00 35850.50 5870.93 32768.00 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.862 rmmod nvme_tcp 00:14:16.862 rmmod nvme_fabrics 00:14:16.862 rmmod nvme_keyring 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 1013617 ']' 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1013617 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1013617 ']' 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1013617 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013617 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013617' 00:14:16.862 killing process with pid 1013617 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1013617 00:14:16.862 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1013617 00:14:17.122 [2024-07-16 00:25:30.564321] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.122 00:25:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.034 00:25:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.034 00:25:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:19.034 00:14:19.034 real 0m15.235s 00:14:19.034 user 0m23.453s 00:14:19.034 sys 0m6.948s 00:14:19.034 00:25:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.034 00:25:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.034 ************************************ 00:14:19.034 END TEST nvmf_host_management 00:14:19.034 ************************************ 00:14:19.295 00:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.295 00:25:32 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.295 00:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.295 00:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.295 00:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.295 ************************************ 00:14:19.295 START TEST nvmf_lvol 00:14:19.295 
************************************ 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.295 * Looking for test storage... 00:14:19.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.295 00:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.513 00:25:40 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:27.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:27.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:27.513 Found net devices under 0000:31:00.0: cvl_0_0 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
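The NIC discovery above boils down to a sysfs glob: for each supported PCI function, nvmf/common.sh lists the kernel net devices registered under it. A minimal standalone sketch of that lookup, assuming the E810 address 0000:31:00.0 seen in this run (addresses and device names will differ on other hosts):

# Sketch only: enumerate net devices bound to one PCI function, as gather_supported_nvmf_pci_devs does.
pci=0000:31:00.0                                      # assumed PCI address, taken from the log above
for path in /sys/bus/pci/devices/"$pci"/net/*; do
  [[ -e $path ]] || continue                          # skip if no net device is bound to this function
  echo "Found net device under $pci: ${path##*/}"     # e.g. cvl_0_0 in this run
done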
00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:27.513 Found net devices under 0000:31:00.1: cvl_0_1 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.513 00:25:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:14:27.513 00:14:27.513 --- 10.0.0.2 ping statistics --- 00:14:27.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.513 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:14:27.513 00:14:27.513 --- 10.0.0.1 ping statistics --- 00:14:27.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.513 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.513 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1019360 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1019360 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1019360 ']' 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.773 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:27.773 [2024-07-16 00:25:41.228754] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:27.773 [2024-07-16 00:25:41.228822] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.773 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.773 [2024-07-16 00:25:41.307338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.773 [2024-07-16 00:25:41.381794] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.773 [2024-07-16 00:25:41.381832] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.773 [2024-07-16 00:25:41.381840] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.773 [2024-07-16 00:25:41.381847] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.773 [2024-07-16 00:25:41.381852] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.773 [2024-07-16 00:25:41.381998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.773 [2024-07-16 00:25:41.382113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.773 [2024-07-16 00:25:41.382116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.715 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.715 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:28.715 00:25:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.715 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.715 00:25:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:28.715 00:25:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.715 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.715 [2024-07-16 00:25:42.178556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.715 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.975 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:28.975 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.975 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:28.975 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:29.234 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:29.493 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=69895ad9-30d3-41d0-9a58-ef4edbfa2a77 00:14:29.493 00:25:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69895ad9-30d3-41d0-9a58-ef4edbfa2a77 lvol 20 00:14:29.493 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f99a523c-8842-405b-8f4e-80925f4cb2f4 00:14:29.493 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:29.752 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f99a523c-8842-405b-8f4e-80925f4cb2f4 00:14:30.011 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
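For reference, the target-side setup that the xtrace above just walked through reduces to the rpc.py sequence below. This is an illustrative sketch, not a verbatim replay: the lvstore and lvol identifiers are captured into shell variables instead of the literal UUIDs printed in this run, and 10.0.0.2 is the namespaced target address configured earlier.

# Sketch of the nvmf_lvol.sh target setup (identifiers are captured, not hard-coded).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the same options as the test
$rpc bdev_malloc_create 64 512                                    # Malloc0: 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # RAID0 across the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # lvstore on the raid; prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # 20 MiB lvol; prints the lvol bdev id
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420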
00:14:30.011 [2024-07-16 00:25:43.576964] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.011 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.270 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1019827 00:14:30.270 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:30.270 00:25:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:30.270 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.219 00:25:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f99a523c-8842-405b-8f4e-80925f4cb2f4 MY_SNAPSHOT 00:14:31.478 00:25:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=52de2e0c-0a82-4e6a-a027-0ed3e21f6a59 00:14:31.478 00:25:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f99a523c-8842-405b-8f4e-80925f4cb2f4 30 00:14:31.737 00:25:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 52de2e0c-0a82-4e6a-a027-0ed3e21f6a59 MY_CLONE 00:14:31.737 00:25:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f5845705-74e3-4a03-a861-b8189941762e 00:14:31.737 00:25:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f5845705-74e3-4a03-a861-b8189941762e 00:14:32.308 00:25:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1019827 00:14:40.436 Initializing NVMe Controllers 00:14:40.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:40.436 Controller IO queue size 128, less than required. 00:14:40.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:40.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:40.436 Initialization complete. Launching workers. 
00:14:40.436 ======================================================== 00:14:40.436 Latency(us) 00:14:40.436 Device Information : IOPS MiB/s Average min max 00:14:40.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12323.50 48.14 10391.29 1464.07 64357.55 00:14:40.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17493.50 68.33 7318.33 686.49 51562.42 00:14:40.436 ======================================================== 00:14:40.436 Total : 29817.00 116.47 8588.40 686.49 64357.55 00:14:40.436 00:14:40.436 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:40.696 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f99a523c-8842-405b-8f4e-80925f4cb2f4 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69895ad9-30d3-41d0-9a58-ef4edbfa2a77 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.957 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.957 rmmod nvme_tcp 00:14:41.217 rmmod nvme_fabrics 00:14:41.217 rmmod nvme_keyring 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1019360 ']' 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1019360 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1019360 ']' 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1019360 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019360 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019360' 00:14:41.217 killing process with pid 1019360 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1019360 00:14:41.217 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1019360 00:14:41.477 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.478 
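The teardown that follows mirrors the setup in reverse order. A hedged sketch of that cleanup path, with $lvol, $lvs and $nvmfpid standing in for the UUIDs and target pid printed earlier in this run:

# Sketch of the nvmf_lvol.sh / nvmftestfini cleanup (variables are placeholders for values from this run).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # remove the subsystem first so no I/O is outstanding
$rpc bdev_lvol_delete "$lvol"                           # delete the logical volume
$rpc bdev_lvol_delete_lvstore -u "$lvs"                 # then the lvstore that held it
modprobe -v -r nvme-tcp                                 # unload initiator-side modules, as nvmfcleanup does
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                         # stop the nvmf_tgt process (pid 1019360 in this run)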
00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.478 00:25:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.389 00:25:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.389 00:14:43.389 real 0m24.180s 00:14:43.389 user 1m3.824s 00:14:43.389 sys 0m8.523s 00:14:43.389 00:25:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.389 00:25:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.389 ************************************ 00:14:43.389 END TEST nvmf_lvol 00:14:43.389 ************************************ 00:14:43.389 00:25:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.389 00:25:56 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:43.389 00:25:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.389 00:25:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.389 00:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.389 ************************************ 00:14:43.389 START TEST nvmf_lvs_grow 00:14:43.389 ************************************ 00:14:43.389 00:25:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:43.651 * Looking for test storage... 
00:14:43.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.651 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.652 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.652 00:25:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.652 00:25:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:51.799 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:51.799 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:51.799 Found net devices under 0000:31:00.0: cvl_0_0 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:51.799 Found net devices under 0000:31:00.1: cvl_0_1 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.799 00:26:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:14:51.799 00:14:51.799 --- 10.0.0.2 ping statistics --- 00:14:51.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.799 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:14:51.799 00:14:51.799 --- 10.0.0.1 ping statistics --- 00:14:51.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.799 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1026759 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1026759 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1026759 ']' 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.799 00:26:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:51.799 [2024-07-16 00:26:05.388115] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:51.800 [2024-07-16 00:26:05.388183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.800 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.060 [2024-07-16 00:26:05.466853] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.060 [2024-07-16 00:26:05.539834] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.060 [2024-07-16 00:26:05.539872] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:52.060 [2024-07-16 00:26:05.539880] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.060 [2024-07-16 00:26:05.539886] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.060 [2024-07-16 00:26:05.539891] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.060 [2024-07-16 00:26:05.539910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.631 00:26:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.893 [2024-07-16 00:26:06.335039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:52.893 ************************************ 00:14:52.893 START TEST lvs_grow_clean 00:14:52.893 ************************************ 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:52.893 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.154 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:53.154 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:53.154 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:14:53.154 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:14:53.154 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:53.414 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:53.414 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:53.414 00:26:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 lvol 150 00:14:53.674 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=01b8f96c-e4e1-4711-829a-b0a33ad23348 00:14:53.674 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:53.674 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:53.674 [2024-07-16 00:26:07.205265] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:53.674 [2024-07-16 00:26:07.205317] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:53.674 true 00:14:53.674 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:14:53.674 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:53.935 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:53.935 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:53.935 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01b8f96c-e4e1-4711-829a-b0a33ad23348 00:14:54.195 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:54.195 [2024-07-16 00:26:07.807119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.195 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1027195 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1027195 /var/tmp/bdevperf.sock 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1027195 ']' 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.455 00:26:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:54.455 [2024-07-16 00:26:08.023100] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
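The clean lvs_grow case being started here exercises one core idea: enlarge the file behind an AIO bdev, rescan it, and grow the lvstore into the new space (the grow call itself appears further down once bdevperf is generating I/O). A condensed, illustrative sketch of that sequence with a placeholder backing-file path:

# Sketch of the grow workflow from nvmf_lvs_grow.sh (the path and UUID variable are placeholders).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio_file=/tmp/aio_bdev                                  # assumed path; the test uses test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"                            # 200 MiB backing file
$rpc bdev_aio_create "$aio_file" aio_bdev 4096          # file-backed AIO bdev with 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_create -u "$lvs" lvol 150                # 150 MiB logical volume
truncate -s 400M "$aio_file"                            # enlarge the backing file
$rpc bdev_aio_rescan aio_bdev                           # let the AIO bdev pick up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"                   # grow the lvstore into the added space
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after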
00:14:54.455 [2024-07-16 00:26:08.023153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027195 ] 00:14:54.455 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.715 [2024-07-16 00:26:08.104160] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.715 [2024-07-16 00:26:08.169255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.284 00:26:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.284 00:26:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:55.284 00:26:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:55.544 Nvme0n1 00:14:55.803 00:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:55.803 [ 00:14:55.803 { 00:14:55.803 "name": "Nvme0n1", 00:14:55.803 "aliases": [ 00:14:55.803 "01b8f96c-e4e1-4711-829a-b0a33ad23348" 00:14:55.803 ], 00:14:55.803 "product_name": "NVMe disk", 00:14:55.803 "block_size": 4096, 00:14:55.803 "num_blocks": 38912, 00:14:55.803 "uuid": "01b8f96c-e4e1-4711-829a-b0a33ad23348", 00:14:55.803 "assigned_rate_limits": { 00:14:55.803 "rw_ios_per_sec": 0, 00:14:55.803 "rw_mbytes_per_sec": 0, 00:14:55.803 "r_mbytes_per_sec": 0, 00:14:55.803 "w_mbytes_per_sec": 0 00:14:55.803 }, 00:14:55.803 "claimed": false, 00:14:55.803 "zoned": false, 00:14:55.803 "supported_io_types": { 00:14:55.803 "read": true, 00:14:55.803 "write": true, 00:14:55.803 "unmap": true, 00:14:55.803 "flush": true, 00:14:55.803 "reset": true, 00:14:55.803 "nvme_admin": true, 00:14:55.803 "nvme_io": true, 00:14:55.803 "nvme_io_md": false, 00:14:55.803 "write_zeroes": true, 00:14:55.803 "zcopy": false, 00:14:55.803 "get_zone_info": false, 00:14:55.803 "zone_management": false, 00:14:55.803 "zone_append": false, 00:14:55.803 "compare": true, 00:14:55.803 "compare_and_write": true, 00:14:55.803 "abort": true, 00:14:55.803 "seek_hole": false, 00:14:55.803 "seek_data": false, 00:14:55.803 "copy": true, 00:14:55.803 "nvme_iov_md": false 00:14:55.803 }, 00:14:55.803 "memory_domains": [ 00:14:55.803 { 00:14:55.803 "dma_device_id": "system", 00:14:55.803 "dma_device_type": 1 00:14:55.803 } 00:14:55.803 ], 00:14:55.803 "driver_specific": { 00:14:55.803 "nvme": [ 00:14:55.803 { 00:14:55.803 "trid": { 00:14:55.803 "trtype": "TCP", 00:14:55.803 "adrfam": "IPv4", 00:14:55.803 "traddr": "10.0.0.2", 00:14:55.803 "trsvcid": "4420", 00:14:55.803 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:55.803 }, 00:14:55.803 "ctrlr_data": { 00:14:55.803 "cntlid": 1, 00:14:55.803 "vendor_id": "0x8086", 00:14:55.803 "model_number": "SPDK bdev Controller", 00:14:55.803 "serial_number": "SPDK0", 00:14:55.804 "firmware_revision": "24.09", 00:14:55.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:55.804 "oacs": { 00:14:55.804 "security": 0, 00:14:55.804 "format": 0, 00:14:55.804 "firmware": 0, 00:14:55.804 "ns_manage": 0 00:14:55.804 }, 00:14:55.804 "multi_ctrlr": true, 00:14:55.804 "ana_reporting": false 00:14:55.804 }, 
00:14:55.804 "vs": { 00:14:55.804 "nvme_version": "1.3" 00:14:55.804 }, 00:14:55.804 "ns_data": { 00:14:55.804 "id": 1, 00:14:55.804 "can_share": true 00:14:55.804 } 00:14:55.804 } 00:14:55.804 ], 00:14:55.804 "mp_policy": "active_passive" 00:14:55.804 } 00:14:55.804 } 00:14:55.804 ] 00:14:55.804 00:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1027487 00:14:55.804 00:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:55.804 00:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.804 Running I/O for 10 seconds... 00:14:57.189 Latency(us) 00:14:57.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.189 Nvme0n1 : 1.00 17929.00 70.04 0.00 0.00 0.00 0.00 0.00 00:14:57.189 =================================================================================================================== 00:14:57.189 Total : 17929.00 70.04 0.00 0.00 0.00 0.00 0.00 00:14:57.189 00:14:57.760 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:14:58.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.022 Nvme0n1 : 2.00 18147.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:58.022 =================================================================================================================== 00:14:58.022 Total : 18147.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:58.022 00:14:58.022 true 00:14:58.022 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:14:58.022 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:58.283 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:58.283 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:58.283 00:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1027487 00:14:58.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.852 Nvme0n1 : 3.00 18219.33 71.17 0.00 0.00 0.00 0.00 0.00 00:14:58.852 =================================================================================================================== 00:14:58.852 Total : 18219.33 71.17 0.00 0.00 0.00 0.00 0.00 00:14:58.852 00:15:00.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.238 Nvme0n1 : 4.00 18255.75 71.31 0.00 0.00 0.00 0.00 0.00 00:15:00.238 =================================================================================================================== 00:15:00.238 Total : 18255.75 71.31 0.00 0.00 0.00 0.00 0.00 00:15:00.238 00:15:01.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.182 Nvme0n1 : 5.00 18290.40 71.45 0.00 0.00 0.00 0.00 0.00 00:15:01.182 =================================================================================================================== 00:15:01.182 
Total : 18290.40 71.45 0.00 0.00 0.00 0.00 0.00 00:15:01.182 00:15:02.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.125 Nvme0n1 : 6.00 18303.00 71.50 0.00 0.00 0.00 0.00 0.00 00:15:02.125 =================================================================================================================== 00:15:02.125 Total : 18303.00 71.50 0.00 0.00 0.00 0.00 0.00 00:15:02.125 00:15:03.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.067 Nvme0n1 : 7.00 18330.57 71.60 0.00 0.00 0.00 0.00 0.00 00:15:03.067 =================================================================================================================== 00:15:03.067 Total : 18330.57 71.60 0.00 0.00 0.00 0.00 0.00 00:15:03.067 00:15:04.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.008 Nvme0n1 : 8.00 18342.88 71.65 0.00 0.00 0.00 0.00 0.00 00:15:04.008 =================================================================================================================== 00:15:04.008 Total : 18342.88 71.65 0.00 0.00 0.00 0.00 0.00 00:15:04.008 00:15:04.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.947 Nvme0n1 : 9.00 18352.78 71.69 0.00 0.00 0.00 0.00 0.00 00:15:04.947 =================================================================================================================== 00:15:04.947 Total : 18352.78 71.69 0.00 0.00 0.00 0.00 0.00 00:15:04.947 00:15:05.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.886 Nvme0n1 : 10.00 18366.90 71.75 0.00 0.00 0.00 0.00 0.00 00:15:05.886 =================================================================================================================== 00:15:05.886 Total : 18366.90 71.75 0.00 0.00 0.00 0.00 0.00 00:15:05.886 00:15:05.886 00:15:05.886 Latency(us) 00:15:05.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.886 Nvme0n1 : 10.01 18366.76 71.75 0.00 0.00 6965.60 4314.45 16711.68 00:15:05.886 =================================================================================================================== 00:15:05.886 Total : 18366.76 71.75 0.00 0.00 6965.60 4314.45 16711.68 00:15:05.886 0 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1027195 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1027195 ']' 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1027195 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.886 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027195 00:15:06.146 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.146 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.146 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027195' 00:15:06.146 killing process with pid 1027195 00:15:06.146 00:26:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1027195 00:15:06.146 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.146 00:15:06.146 Latency(us) 00:15:06.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.146 =================================================================================================================== 00:15:06.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.146 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1027195 00:15:06.146 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.406 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:06.406 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:06.406 00:26:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:06.666 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:06.666 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:06.666 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:06.666 [2024-07-16 00:26:20.275403] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:06.926 request: 00:15:06.926 { 00:15:06.926 "uuid": "3e40f9a7-e0b8-4563-a0df-ff16846f2291", 00:15:06.926 "method": "bdev_lvol_get_lvstores", 00:15:06.926 "req_id": 1 00:15:06.926 } 00:15:06.926 Got JSON-RPC error response 00:15:06.926 response: 00:15:06.926 { 00:15:06.926 "code": -19, 00:15:06.926 "message": "No such device" 00:15:06.926 } 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.926 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:07.185 aio_bdev 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 01b8f96c-e4e1-4711-829a-b0a33ad23348 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=01b8f96c-e4e1-4711-829a-b0a33ad23348 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:07.185 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01b8f96c-e4e1-4711-829a-b0a33ad23348 -t 2000 00:15:07.445 [ 00:15:07.445 { 00:15:07.445 "name": "01b8f96c-e4e1-4711-829a-b0a33ad23348", 00:15:07.445 "aliases": [ 00:15:07.445 "lvs/lvol" 00:15:07.445 ], 00:15:07.445 "product_name": "Logical Volume", 00:15:07.445 "block_size": 4096, 00:15:07.445 "num_blocks": 38912, 00:15:07.445 "uuid": "01b8f96c-e4e1-4711-829a-b0a33ad23348", 00:15:07.445 "assigned_rate_limits": { 00:15:07.445 "rw_ios_per_sec": 0, 00:15:07.445 "rw_mbytes_per_sec": 0, 00:15:07.445 "r_mbytes_per_sec": 0, 00:15:07.445 "w_mbytes_per_sec": 0 00:15:07.445 }, 00:15:07.445 "claimed": false, 00:15:07.445 "zoned": false, 00:15:07.445 "supported_io_types": { 00:15:07.445 "read": true, 00:15:07.445 "write": true, 00:15:07.445 "unmap": true, 00:15:07.445 "flush": false, 00:15:07.445 "reset": true, 00:15:07.445 "nvme_admin": false, 00:15:07.445 "nvme_io": false, 00:15:07.445 
"nvme_io_md": false, 00:15:07.445 "write_zeroes": true, 00:15:07.445 "zcopy": false, 00:15:07.445 "get_zone_info": false, 00:15:07.445 "zone_management": false, 00:15:07.445 "zone_append": false, 00:15:07.445 "compare": false, 00:15:07.445 "compare_and_write": false, 00:15:07.445 "abort": false, 00:15:07.445 "seek_hole": true, 00:15:07.445 "seek_data": true, 00:15:07.445 "copy": false, 00:15:07.445 "nvme_iov_md": false 00:15:07.445 }, 00:15:07.445 "driver_specific": { 00:15:07.445 "lvol": { 00:15:07.445 "lvol_store_uuid": "3e40f9a7-e0b8-4563-a0df-ff16846f2291", 00:15:07.445 "base_bdev": "aio_bdev", 00:15:07.445 "thin_provision": false, 00:15:07.445 "num_allocated_clusters": 38, 00:15:07.445 "snapshot": false, 00:15:07.445 "clone": false, 00:15:07.445 "esnap_clone": false 00:15:07.445 } 00:15:07.445 } 00:15:07.445 } 00:15:07.445 ] 00:15:07.445 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:07.445 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:07.445 00:26:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:07.704 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:07.704 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:07.704 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:07.704 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:07.704 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01b8f96c-e4e1-4711-829a-b0a33ad23348 00:15:07.964 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e40f9a7-e0b8-4563-a0df-ff16846f2291 00:15:07.964 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.225 00:15:08.225 real 0m15.389s 00:15:08.225 user 0m15.039s 00:15:08.225 sys 0m1.276s 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:08.225 ************************************ 00:15:08.225 END TEST lvs_grow_clean 00:15:08.225 ************************************ 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:08.225 ************************************ 00:15:08.225 START TEST lvs_grow_dirty 00:15:08.225 ************************************ 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.225 00:26:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:08.489 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:08.489 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:08.768 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb54f36b-15e2-4840-a83d-998b8efb3ace lvol 150 00:15:09.067 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:09.067 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:09.067 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:09.067 
[2024-07-16 00:26:22.638720] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:09.067 [2024-07-16 00:26:22.638771] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:09.067 true 00:15:09.067 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:09.067 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:09.344 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:09.344 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:09.344 00:26:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:09.603 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:09.863 [2024-07-16 00:26:23.240677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1030250 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1030250 /var/tmp/bdevperf.sock 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1030250 ']' 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
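Condensed, the dirty-path setup traced above is: back an lvstore with a 200M AIO file, carve a 150M lvol out of it, grow the file to 400M and rescan the AIO bdev, then export the lvol over NVMe/TCP. A sketch under assumptions: rpc.py abbreviates the full scripts/rpc.py path from the trace, $TGT stands for the test/nvmf/target directory, and the UUIDs are replaced by placeholders.

  truncate -s 200M $TGT/aio_bdev
  rpc.py bdev_aio_create $TGT/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M $TGT/aio_bdev       # grow the backing file ...
  rpc.py bdev_aio_rescan aio_bdev      # ... and let the AIO bdev pick up the new size
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420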
00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.863 00:26:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:09.863 [2024-07-16 00:26:23.454067] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:09.864 [2024-07-16 00:26:23.454119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030250 ] 00:15:09.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.123 [2024-07-16 00:26:23.535546] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.123 [2024-07-16 00:26:23.589465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.692 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.692 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:10.692 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:10.952 Nvme0n1 00:15:10.952 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:11.211 [ 00:15:11.211 { 00:15:11.211 "name": "Nvme0n1", 00:15:11.211 "aliases": [ 00:15:11.211 "517a5fa5-7f8c-4201-8f23-61cb8bc47da0" 00:15:11.211 ], 00:15:11.211 "product_name": "NVMe disk", 00:15:11.211 "block_size": 4096, 00:15:11.211 "num_blocks": 38912, 00:15:11.211 "uuid": "517a5fa5-7f8c-4201-8f23-61cb8bc47da0", 00:15:11.211 "assigned_rate_limits": { 00:15:11.211 "rw_ios_per_sec": 0, 00:15:11.211 "rw_mbytes_per_sec": 0, 00:15:11.211 "r_mbytes_per_sec": 0, 00:15:11.211 "w_mbytes_per_sec": 0 00:15:11.211 }, 00:15:11.211 "claimed": false, 00:15:11.211 "zoned": false, 00:15:11.211 "supported_io_types": { 00:15:11.211 "read": true, 00:15:11.211 "write": true, 00:15:11.211 "unmap": true, 00:15:11.211 "flush": true, 00:15:11.211 "reset": true, 00:15:11.211 "nvme_admin": true, 00:15:11.211 "nvme_io": true, 00:15:11.211 "nvme_io_md": false, 00:15:11.211 "write_zeroes": true, 00:15:11.211 "zcopy": false, 00:15:11.211 "get_zone_info": false, 00:15:11.211 "zone_management": false, 00:15:11.211 "zone_append": false, 00:15:11.211 "compare": true, 00:15:11.211 "compare_and_write": true, 00:15:11.211 "abort": true, 00:15:11.211 "seek_hole": false, 00:15:11.211 "seek_data": false, 00:15:11.211 "copy": true, 00:15:11.211 "nvme_iov_md": false 00:15:11.211 }, 00:15:11.211 "memory_domains": [ 00:15:11.211 { 00:15:11.211 "dma_device_id": "system", 00:15:11.211 "dma_device_type": 1 00:15:11.211 } 00:15:11.211 ], 00:15:11.211 "driver_specific": { 00:15:11.211 "nvme": [ 00:15:11.211 { 00:15:11.211 "trid": { 00:15:11.211 "trtype": "TCP", 00:15:11.211 "adrfam": "IPv4", 00:15:11.211 "traddr": "10.0.0.2", 00:15:11.211 "trsvcid": "4420", 00:15:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:11.211 }, 00:15:11.211 "ctrlr_data": { 00:15:11.211 "cntlid": 1, 00:15:11.211 "vendor_id": "0x8086", 00:15:11.211 "model_number": "SPDK bdev Controller", 00:15:11.211 "serial_number": "SPDK0", 
00:15:11.211 "firmware_revision": "24.09", 00:15:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:11.211 "oacs": { 00:15:11.211 "security": 0, 00:15:11.211 "format": 0, 00:15:11.211 "firmware": 0, 00:15:11.211 "ns_manage": 0 00:15:11.211 }, 00:15:11.211 "multi_ctrlr": true, 00:15:11.211 "ana_reporting": false 00:15:11.211 }, 00:15:11.211 "vs": { 00:15:11.211 "nvme_version": "1.3" 00:15:11.211 }, 00:15:11.211 "ns_data": { 00:15:11.211 "id": 1, 00:15:11.211 "can_share": true 00:15:11.211 } 00:15:11.211 } 00:15:11.211 ], 00:15:11.211 "mp_policy": "active_passive" 00:15:11.211 } 00:15:11.211 } 00:15:11.211 ] 00:15:11.211 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1030569 00:15:11.211 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:11.211 00:26:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.211 Running I/O for 10 seconds... 00:15:12.593 Latency(us) 00:15:12.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.593 Nvme0n1 : 1.00 18007.00 70.34 0.00 0.00 0.00 0.00 0.00 00:15:12.593 =================================================================================================================== 00:15:12.593 Total : 18007.00 70.34 0.00 0.00 0.00 0.00 0.00 00:15:12.593 00:15:13.164 00:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:13.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.164 Nvme0n1 : 2.00 18132.50 70.83 0.00 0.00 0.00 0.00 0.00 00:15:13.164 =================================================================================================================== 00:15:13.164 Total : 18132.50 70.83 0.00 0.00 0.00 0.00 0.00 00:15:13.164 00:15:13.424 true 00:15:13.424 00:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:13.424 00:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:13.424 00:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:13.424 00:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:13.424 00:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1030569 00:15:14.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.366 Nvme0n1 : 3.00 18184.00 71.03 0.00 0.00 0.00 0.00 0.00 00:15:14.366 =================================================================================================================== 00:15:14.367 Total : 18184.00 71.03 0.00 0.00 0.00 0.00 0.00 00:15:14.367 00:15:15.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.309 Nvme0n1 : 4.00 18229.75 71.21 0.00 0.00 0.00 0.00 0.00 00:15:15.309 =================================================================================================================== 00:15:15.309 Total : 18229.75 71.21 0.00 
0.00 0.00 0.00 0.00 00:15:15.309 00:15:16.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.279 Nvme0n1 : 5.00 18257.40 71.32 0.00 0.00 0.00 0.00 0.00 00:15:16.279 =================================================================================================================== 00:15:16.279 Total : 18257.40 71.32 0.00 0.00 0.00 0.00 0.00 00:15:16.279 00:15:17.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.223 Nvme0n1 : 6.00 18257.50 71.32 0.00 0.00 0.00 0.00 0.00 00:15:17.223 =================================================================================================================== 00:15:17.223 Total : 18257.50 71.32 0.00 0.00 0.00 0.00 0.00 00:15:17.223 00:15:18.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.609 Nvme0n1 : 7.00 18280.00 71.41 0.00 0.00 0.00 0.00 0.00 00:15:18.609 =================================================================================================================== 00:15:18.609 Total : 18280.00 71.41 0.00 0.00 0.00 0.00 0.00 00:15:18.609 00:15:19.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.181 Nvme0n1 : 8.00 18290.75 71.45 0.00 0.00 0.00 0.00 0.00 00:15:19.181 =================================================================================================================== 00:15:19.181 Total : 18290.75 71.45 0.00 0.00 0.00 0.00 0.00 00:15:19.181 00:15:20.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.564 Nvme0n1 : 9.00 18306.44 71.51 0.00 0.00 0.00 0.00 0.00 00:15:20.564 =================================================================================================================== 00:15:20.564 Total : 18306.44 71.51 0.00 0.00 0.00 0.00 0.00 00:15:20.565 00:15:21.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.507 Nvme0n1 : 10.00 18312.50 71.53 0.00 0.00 0.00 0.00 0.00 00:15:21.507 =================================================================================================================== 00:15:21.507 Total : 18312.50 71.53 0.00 0.00 0.00 0.00 0.00 00:15:21.507 00:15:21.507 00:15:21.507 Latency(us) 00:15:21.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.507 Nvme0n1 : 10.01 18315.08 71.54 0.00 0.00 6986.62 4341.76 14745.60 00:15:21.507 =================================================================================================================== 00:15:21.507 Total : 18315.08 71.54 0.00 0.00 6986.62 4341.76 14745.60 00:15:21.507 0 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1030250 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1030250 ']' 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1030250 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030250 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:21.507 00:26:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030250' 00:15:21.507 killing process with pid 1030250 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1030250 00:15:21.507 Received shutdown signal, test time was about 10.000000 seconds 00:15:21.507 00:15:21.507 Latency(us) 00:15:21.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.507 =================================================================================================================== 00:15:21.507 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:21.507 00:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1030250 00:15:21.507 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.767 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:21.767 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:21.767 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1026759 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1026759 00:15:22.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1026759 Killed "${NVMF_APP[@]}" "$@" 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1032611 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1032611 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1032611 ']' 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.028 00:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:22.028 [2024-07-16 00:26:35.595317] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:22.028 [2024-07-16 00:26:35.595372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.028 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.289 [2024-07-16 00:26:35.672491] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.289 [2024-07-16 00:26:35.739438] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.289 [2024-07-16 00:26:35.739477] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.289 [2024-07-16 00:26:35.739484] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.289 [2024-07-16 00:26:35.739490] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.289 [2024-07-16 00:26:35.739496] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
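What makes this the dirty variant, per the trace above: the original nvmf_tgt (pid 1026759 in this run) is killed with SIGKILL while the lvstore is still open, a fresh target is started inside the test network namespace, and re-creating the AIO bdev then forces the blobstore recovery path (the "Performing recovery on blobstore" notices just below). A rough sketch, where $old_nvmfpid, $SPDK and $TGT are assumed placeholders, not names used by the script:

  kill -9 $old_nvmfpid
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc.py bdev_aio_create $TGT/aio_bdev aio_bdev 4096   # re-registering the bdev triggers lvstore recovery on load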
00:15:22.289 [2024-07-16 00:26:35.739515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.860 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:23.121 [2024-07-16 00:26:36.544510] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:23.121 [2024-07-16 00:26:36.544605] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:23.121 [2024-07-16 00:26:36.544634] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:23.121 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 -t 2000 00:15:23.382 [ 00:15:23.382 { 00:15:23.382 "name": "517a5fa5-7f8c-4201-8f23-61cb8bc47da0", 00:15:23.382 "aliases": [ 00:15:23.382 "lvs/lvol" 00:15:23.382 ], 00:15:23.382 "product_name": "Logical Volume", 00:15:23.382 "block_size": 4096, 00:15:23.382 "num_blocks": 38912, 00:15:23.382 "uuid": "517a5fa5-7f8c-4201-8f23-61cb8bc47da0", 00:15:23.382 "assigned_rate_limits": { 00:15:23.382 "rw_ios_per_sec": 0, 00:15:23.382 "rw_mbytes_per_sec": 0, 00:15:23.382 "r_mbytes_per_sec": 0, 00:15:23.382 "w_mbytes_per_sec": 0 00:15:23.382 }, 00:15:23.382 "claimed": false, 00:15:23.382 "zoned": false, 00:15:23.382 "supported_io_types": { 00:15:23.382 "read": true, 00:15:23.382 "write": true, 00:15:23.382 "unmap": true, 00:15:23.382 "flush": false, 00:15:23.382 "reset": true, 00:15:23.382 "nvme_admin": false, 00:15:23.382 "nvme_io": false, 00:15:23.382 "nvme_io_md": 
false, 00:15:23.382 "write_zeroes": true, 00:15:23.382 "zcopy": false, 00:15:23.382 "get_zone_info": false, 00:15:23.382 "zone_management": false, 00:15:23.382 "zone_append": false, 00:15:23.382 "compare": false, 00:15:23.382 "compare_and_write": false, 00:15:23.382 "abort": false, 00:15:23.382 "seek_hole": true, 00:15:23.382 "seek_data": true, 00:15:23.382 "copy": false, 00:15:23.382 "nvme_iov_md": false 00:15:23.382 }, 00:15:23.382 "driver_specific": { 00:15:23.382 "lvol": { 00:15:23.382 "lvol_store_uuid": "bb54f36b-15e2-4840-a83d-998b8efb3ace", 00:15:23.382 "base_bdev": "aio_bdev", 00:15:23.382 "thin_provision": false, 00:15:23.382 "num_allocated_clusters": 38, 00:15:23.382 "snapshot": false, 00:15:23.382 "clone": false, 00:15:23.382 "esnap_clone": false 00:15:23.382 } 00:15:23.382 } 00:15:23.382 } 00:15:23.382 ] 00:15:23.382 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:23.382 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:23.382 00:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:23.643 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:23.643 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:23.643 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:23.643 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:23.643 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:23.905 [2024-07-16 00:26:37.316357] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
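The negative check traced above: deleting the AIO bdev closes the lvstore, so a follow-up bdev_lvol_get_lvstores must fail with -19 (the "No such device" JSON-RPC response shown just below). A minimal reproduction sketch, with <lvs-uuid> as a placeholder and the failure handling simplified from the script's NOT helper:

  rpc.py bdev_aio_delete aio_bdev
  if rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>; then
      echo "lvstore still visible after aio_bdev removal" >&2
      exit 1
  fi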
00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:23.905 request: 00:15:23.905 { 00:15:23.905 "uuid": "bb54f36b-15e2-4840-a83d-998b8efb3ace", 00:15:23.905 "method": "bdev_lvol_get_lvstores", 00:15:23.905 "req_id": 1 00:15:23.905 } 00:15:23.905 Got JSON-RPC error response 00:15:23.905 response: 00:15:23.905 { 00:15:23.905 "code": -19, 00:15:23.905 "message": "No such device" 00:15:23.905 } 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.905 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:24.167 aio_bdev 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:24.167 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:24.428 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 -t 2000 00:15:24.428 [ 00:15:24.428 { 00:15:24.428 "name": "517a5fa5-7f8c-4201-8f23-61cb8bc47da0", 00:15:24.428 "aliases": [ 00:15:24.428 "lvs/lvol" 00:15:24.428 ], 00:15:24.428 "product_name": "Logical Volume", 00:15:24.428 "block_size": 4096, 00:15:24.428 "num_blocks": 38912, 00:15:24.428 "uuid": "517a5fa5-7f8c-4201-8f23-61cb8bc47da0", 00:15:24.428 "assigned_rate_limits": { 00:15:24.428 "rw_ios_per_sec": 0, 00:15:24.428 "rw_mbytes_per_sec": 0, 00:15:24.428 "r_mbytes_per_sec": 0, 00:15:24.428 "w_mbytes_per_sec": 0 00:15:24.428 }, 00:15:24.428 "claimed": false, 00:15:24.428 "zoned": false, 00:15:24.428 "supported_io_types": { 
00:15:24.428 "read": true, 00:15:24.428 "write": true, 00:15:24.428 "unmap": true, 00:15:24.428 "flush": false, 00:15:24.428 "reset": true, 00:15:24.428 "nvme_admin": false, 00:15:24.428 "nvme_io": false, 00:15:24.428 "nvme_io_md": false, 00:15:24.428 "write_zeroes": true, 00:15:24.428 "zcopy": false, 00:15:24.428 "get_zone_info": false, 00:15:24.428 "zone_management": false, 00:15:24.428 "zone_append": false, 00:15:24.428 "compare": false, 00:15:24.428 "compare_and_write": false, 00:15:24.428 "abort": false, 00:15:24.428 "seek_hole": true, 00:15:24.428 "seek_data": true, 00:15:24.428 "copy": false, 00:15:24.428 "nvme_iov_md": false 00:15:24.428 }, 00:15:24.428 "driver_specific": { 00:15:24.428 "lvol": { 00:15:24.428 "lvol_store_uuid": "bb54f36b-15e2-4840-a83d-998b8efb3ace", 00:15:24.428 "base_bdev": "aio_bdev", 00:15:24.428 "thin_provision": false, 00:15:24.428 "num_allocated_clusters": 38, 00:15:24.428 "snapshot": false, 00:15:24.428 "clone": false, 00:15:24.428 "esnap_clone": false 00:15:24.428 } 00:15:24.428 } 00:15:24.428 } 00:15:24.428 ] 00:15:24.428 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:24.428 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:24.428 00:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:24.689 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:24.689 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:24.689 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:24.689 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:24.689 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 517a5fa5-7f8c-4201-8f23-61cb8bc47da0 00:15:24.950 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb54f36b-15e2-4840-a83d-998b8efb3ace 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:25.211 00:15:25.211 real 0m16.931s 00:15:25.211 user 0m44.416s 00:15:25.211 sys 0m2.817s 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:25.211 ************************************ 00:15:25.211 END TEST lvs_grow_dirty 00:15:25.211 ************************************ 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
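After bdev_lvol_grow_lvstore (run earlier in the trace), the recovered lvstore must show the same cluster accounting as the clean path before the per-test cleanup tears everything down. A sketch of the checks and teardown traced above, with rpc.py, $TGT and the UUID placeholders as assumed shorthands:

  free=$(rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters')
  total=$(rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))      # values asserted in the trace above
  rpc.py bdev_lvol_delete <lvol-uuid>
  rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
  rpc.py bdev_aio_delete aio_bdev
  rm -f $TGT/aio_bdev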
00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:25.211 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:25.211 nvmf_trace.0 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.472 rmmod nvme_tcp 00:15:25.472 rmmod nvme_fabrics 00:15:25.472 rmmod nvme_keyring 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1032611 ']' 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1032611 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1032611 ']' 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1032611 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032611 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032611' 00:15:25.472 killing process with pid 1032611 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1032611 00:15:25.472 00:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1032611 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.734 
00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.734 00:26:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.649 00:26:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.649 00:15:27.649 real 0m44.189s 00:15:27.649 user 1m5.582s 00:15:27.649 sys 0m10.602s 00:15:27.649 00:26:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.649 00:26:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 ************************************ 00:15:27.649 END TEST nvmf_lvs_grow 00:15:27.649 ************************************ 00:15:27.649 00:26:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.649 00:26:41 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:27.649 00:26:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.649 00:26:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.649 00:26:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 ************************************ 00:15:27.649 START TEST nvmf_bdev_io_wait 00:15:27.649 ************************************ 00:15:27.649 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:27.909 * Looking for test storage... 
00:15:27.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.909 00:26:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:36.053 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:36.053 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:36.053 Found net devices under 0000:31:00.0: cvl_0_0 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
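The device scan above keys off the Intel E810 PCI IDs (vendor 0x8086, device 0x159b) and then reads each function's netdev name out of sysfs, which is how it arrives at cvl_0_0 and cvl_0_1. A minimal stand-alone version of that lookup, assuming the same sysfs layout the script relies on:

# List E810 functions by PCI ID, then print the kernel interface bound to each one.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    # Each network function exposes its interface under .../net/<ifname>.
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "$pci -> $(basename "$netdev")"
    done
done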
tcp == tcp ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:36.053 Found net devices under 0000:31:00.1: cvl_0_1 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:15:36.053 00:15:36.053 --- 10.0.0.2 ping statistics --- 00:15:36.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.053 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:15:36.053 00:15:36.053 --- 10.0.0.1 ping statistics --- 00:15:36.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.053 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.053 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1038020 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1038020 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1038020 ']' 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.054 00:26:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:36.314 [2024-07-16 00:26:49.702422] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
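The two pings above close out nvmf_tcp_init: one E810 port (cvl_0_0) is moved into a private namespace for the target at 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1 in the default namespace. Condensed from the commands in the trace; interface names, addresses, and the listener port are the ones this rig reports:

# Target side gets its own namespace so the kernel initiator cannot reach it directly.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps cvl_0_1 in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the default listener port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks, exactly as in the log: target IP from the initiator side, and back.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1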
00:15:36.314 [2024-07-16 00:26:49.702485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.314 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.314 [2024-07-16 00:26:49.784246] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.314 [2024-07-16 00:26:49.860139] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.314 [2024-07-16 00:26:49.860178] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.314 [2024-07-16 00:26:49.860186] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.314 [2024-07-16 00:26:49.860193] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.314 [2024-07-16 00:26:49.860199] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.314 [2024-07-16 00:26:49.860281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.314 [2024-07-16 00:26:49.860342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.314 [2024-07-16 00:26:49.860509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.314 [2024-07-16 00:26:49.860509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.883 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.883 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:36.883 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.883 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.883 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 [2024-07-16 00:26:50.584974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
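nvmfappstart, traced above, launches the target inside the namespace with --wait-for-rpc so that bdev and transport options can still be set over RPC before the framework initializes. A sketch of that launch with the workspace path shortened; the shm id, trace mask, and core mask match the trace:

# Start the SPDK nvmf target inside the target namespace, deferring init until RPC config is done.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# The harness (waitforlisten) then polls until the app listens on /var/tmp/spdk.sock.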
00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 Malloc0 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 [2024-07-16 00:26:50.654621] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1038351 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1038353 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:37.144 { 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme$subsystem", 00:15:37.144 "trtype": "$TEST_TRANSPORT", 00:15:37.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "$NVMF_PORT", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.144 "hdgst": ${hdgst:-false}, 00:15:37.144 "ddgst": ${ddgst:-false} 00:15:37.144 }, 00:15:37.144 "method": "bdev_nvme_attach_controller" 00:15:37.144 } 00:15:37.144 EOF 00:15:37.144 )") 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1038355 00:15:37.144 00:26:50 
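The rpc_cmd calls traced above provision the target: a deliberately tiny bdev_io pool (the point of the bdev_io_wait test), deferred framework init, a TCP transport, a 64 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420. A sketch of the same sequence; the assumption here is that rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock, while the flag spellings are taken verbatim from the trace:

rpc=scripts/rpc.py

$rpc bdev_set_options -p 5 -c 1                # tiny bdev_io pool/cache so I/O is forced to wait
$rpc framework_start_init                      # finish the startup deferred by --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420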
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1038358 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:37.144 { 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme$subsystem", 00:15:37.144 "trtype": "$TEST_TRANSPORT", 00:15:37.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "$NVMF_PORT", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.144 "hdgst": ${hdgst:-false}, 00:15:37.144 "ddgst": ${ddgst:-false} 00:15:37.144 }, 00:15:37.144 "method": "bdev_nvme_attach_controller" 00:15:37.144 } 00:15:37.144 EOF 00:15:37.144 )") 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:37.144 { 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme$subsystem", 00:15:37.144 "trtype": "$TEST_TRANSPORT", 00:15:37.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "$NVMF_PORT", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.144 "hdgst": ${hdgst:-false}, 00:15:37.144 "ddgst": ${ddgst:-false} 00:15:37.144 }, 00:15:37.144 "method": "bdev_nvme_attach_controller" 00:15:37.144 } 00:15:37.144 EOF 00:15:37.144 )") 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:37.144 { 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme$subsystem", 00:15:37.144 "trtype": "$TEST_TRANSPORT", 00:15:37.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "$NVMF_PORT", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.144 "hdgst": ${hdgst:-false}, 00:15:37.144 "ddgst": ${ddgst:-false} 00:15:37.144 }, 00:15:37.144 "method": "bdev_nvme_attach_controller" 00:15:37.144 } 00:15:37.144 EOF 00:15:37.144 )") 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1038351 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme1", 00:15:37.144 "trtype": "tcp", 00:15:37.144 "traddr": "10.0.0.2", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "4420", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.144 "hdgst": false, 00:15:37.144 "ddgst": false 00:15:37.144 }, 00:15:37.144 "method": "bdev_nvme_attach_controller" 00:15:37.144 }' 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:37.144 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:37.144 "params": { 00:15:37.144 "name": "Nvme1", 00:15:37.144 "trtype": "tcp", 00:15:37.144 "traddr": "10.0.0.2", 00:15:37.144 "adrfam": "ipv4", 00:15:37.144 "trsvcid": "4420", 00:15:37.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.145 "hdgst": false, 00:15:37.145 "ddgst": false 00:15:37.145 }, 00:15:37.145 "method": "bdev_nvme_attach_controller" 00:15:37.145 }' 00:15:37.145 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:37.145 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:37.145 "params": { 00:15:37.145 "name": "Nvme1", 00:15:37.145 "trtype": "tcp", 00:15:37.145 "traddr": "10.0.0.2", 00:15:37.145 "adrfam": "ipv4", 00:15:37.145 "trsvcid": "4420", 00:15:37.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.145 "hdgst": false, 00:15:37.145 "ddgst": false 00:15:37.145 }, 00:15:37.145 "method": "bdev_nvme_attach_controller" 00:15:37.145 }' 00:15:37.145 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:37.145 00:26:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:37.145 "params": { 00:15:37.145 "name": "Nvme1", 00:15:37.145 "trtype": "tcp", 00:15:37.145 "traddr": "10.0.0.2", 00:15:37.145 "adrfam": "ipv4", 00:15:37.145 "trsvcid": "4420", 00:15:37.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.145 "hdgst": false, 00:15:37.145 "ddgst": false 00:15:37.145 }, 00:15:37.145 "method": "bdev_nvme_attach_controller" 00:15:37.145 }' 00:15:37.145 [2024-07-16 00:26:50.707808] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:37.145 [2024-07-16 00:26:50.707861] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:37.145 [2024-07-16 00:26:50.709128] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:37.145 [2024-07-16 00:26:50.709176] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:37.145 [2024-07-16 00:26:50.711897] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:15:37.145 [2024-07-16 00:26:50.711942] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:37.145 [2024-07-16 00:26:50.713811] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
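The four bdevperf instances above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) each read a generated JSON config through --json; the printf output in the trace shows the bdev_nvme_attach_controller parameters used for Nvme1. A sketch of one such launch, writing the config to a temporary file instead of the /dev/fd/63 process substitution used by the harness, with the binary path shortened; the surrounding "subsystems" wrapper is an assumption about gen_nvmf_target_json's full output, only the params block is taken from the trace:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    } ]
  } ]
}
EOF

# One of the four workloads from the trace: 128-deep 4 KiB writes for 1 second on core 4 (mask 0x10).
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256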
00:15:37.145 [2024-07-16 00:26:50.713855] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:37.145 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.404 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.404 [2024-07-16 00:26:50.866331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.404 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.404 [2024-07-16 00:26:50.917753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.404 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.404 [2024-07-16 00:26:50.920820] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.404 [2024-07-16 00:26:50.969795] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.404 [2024-07-16 00:26:50.971987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:37.404 [2024-07-16 00:26:51.016269] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.404 [2024-07-16 00:26:51.019985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:37.664 [2024-07-16 00:26:51.066178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:37.664 Running I/O for 1 seconds... 00:15:37.664 Running I/O for 1 seconds... 00:15:37.664 Running I/O for 1 seconds... 00:15:37.923 Running I/O for 1 seconds... 00:15:38.861 00:15:38.861 Latency(us) 00:15:38.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.861 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:38.861 Nvme1n1 : 1.01 11807.24 46.12 0.00 0.00 10786.67 5079.04 14090.24 00:15:38.861 =================================================================================================================== 00:15:38.861 Total : 11807.24 46.12 0.00 0.00 10786.67 5079.04 14090.24 00:15:38.861 00:15:38.861 Latency(us) 00:15:38.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.861 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:38.861 Nvme1n1 : 1.00 185995.55 726.55 0.00 0.00 685.62 269.65 808.96 00:15:38.861 =================================================================================================================== 00:15:38.861 Total : 185995.55 726.55 0.00 0.00 685.62 269.65 808.96 00:15:38.861 00:15:38.861 Latency(us) 00:15:38.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.862 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:38.862 Nvme1n1 : 1.00 11257.09 43.97 0.00 0.00 11341.64 4150.61 23046.83 00:15:38.862 =================================================================================================================== 00:15:38.862 Total : 11257.09 43.97 0.00 0.00 11341.64 4150.61 23046.83 00:15:38.862 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1038353 00:15:38.862 00:15:38.862 Latency(us) 00:15:38.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.862 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:38.862 Nvme1n1 : 1.00 14339.81 56.01 0.00 0.00 8898.67 4805.97 18786.99 00:15:38.862 =================================================================================================================== 00:15:38.862 Total : 14339.81 
56.01 0.00 0.00 8898.67 4805.97 18786.99 00:15:38.862 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1038355 00:15:38.862 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1038358 00:15:38.862 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.862 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.121 rmmod nvme_tcp 00:15:39.121 rmmod nvme_fabrics 00:15:39.121 rmmod nvme_keyring 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1038020 ']' 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1038020 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1038020 ']' 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1038020 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1038020 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1038020' 00:15:39.121 killing process with pid 1038020 00:15:39.121 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1038020 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1038020 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.122 00:26:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.667 00:26:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.667 00:15:41.667 real 0m13.548s 00:15:41.667 user 0m18.946s 00:15:41.667 sys 0m7.613s 00:15:41.667 00:26:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.667 00:26:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.667 ************************************ 00:15:41.667 END TEST nvmf_bdev_io_wait 00:15:41.667 ************************************ 00:15:41.667 00:26:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:41.667 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:41.667 00:26:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.667 00:26:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.667 00:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.667 ************************************ 00:15:41.667 START TEST nvmf_queue_depth 00:15:41.667 ************************************ 00:15:41.667 00:26:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:41.667 * Looking for test storage... 
00:15:41.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.667 00:26:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.668 00:26:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.818 
00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:49.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:49.818 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:49.818 Found net devices under 0000:31:00.0: cvl_0_0 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:49.818 Found net devices under 0000:31:00.1: cvl_0_1 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.818 00:27:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:15:49.818 00:15:49.818 --- 10.0.0.2 ping statistics --- 00:15:49.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.818 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:15:49.818 00:15:49.818 --- 10.0.0.1 ping statistics --- 00:15:49.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.818 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1043501 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1043501 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1043501 ']' 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.818 00:27:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 [2024-07-16 00:27:03.345139] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
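The block above is nvmftestinit doing its NIC plumbing: it resolves the two E810 PCI functions (vendor 0x8086, device 0x159b, bound to ice) to their kernel netdev names through sysfs, moves one port into a private network namespace to act as the target side, assigns 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), opens TCP port 4420 in iptables, and ping-checks both directions before nvmf_tgt is launched inside that namespace. A condensed sketch of the same sequence follows; it only restates commands already traced above, using the interface names and addresses this particular run happened to pick.

  # resolve the E810 PCI functions to netdev names via sysfs
  ls /sys/bus/pci/devices/0000:31:00.0/net/    # -> cvl_0_0 (target-side port on this host)
  ls /sys/bus/pci/devices/0000:31:00.1/net/    # -> cvl_0_1 (initiator-side port)

  # target port lives in its own namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # bidirectional reachability check, then nvmf_tgt is started inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1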
00:15:49.818 [2024-07-16 00:27:03.345188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.818 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.818 [2024-07-16 00:27:03.423564] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.081 [2024-07-16 00:27:03.490931] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.081 [2024-07-16 00:27:03.490969] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.081 [2024-07-16 00:27:03.490976] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.081 [2024-07-16 00:27:03.490983] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.082 [2024-07-16 00:27:03.490988] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.082 [2024-07-16 00:27:03.491014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 [2024-07-16 00:27:04.193567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 Malloc0 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.731 
00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 [2024-07-16 00:27:04.258318] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1043837 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1043837 /var/tmp/bdevperf.sock 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1043837 ']' 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.731 00:27:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.731 [2024-07-16 00:27:04.312350] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
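With nvmf_tgt now coming up on /var/tmp/spdk.sock, queue_depth.sh configures the target over RPC and then drives it from a second process, bdevperf, at a queue depth of 1024. In the trace, rpc_cmd is the harness wrapper that forwards to scripts/rpc.py; the roughly equivalent standalone commands, run from the SPDK tree and assuming the same socket paths as this job, would be:

  # target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf idles in -z mode on its own RPC socket until a bdev is attached,
  # then perform_tests runs the 10 s verify workload at qd=1024 with 4 KiB I/Os
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The throughput/latency table a few lines below (about 11.6k IOPS at 4 KiB for this run) is the output of that perform_tests call.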
00:15:50.731 [2024-07-16 00:27:04.312398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043837 ] 00:15:50.731 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.999 [2024-07-16 00:27:04.377027] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.000 [2024-07-16 00:27:04.441166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.569 00:27:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.569 00:27:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:51.569 00:27:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.569 00:27:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.569 00:27:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.829 NVMe0n1 00:15:51.829 00:27:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.829 00:27:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.829 Running I/O for 10 seconds... 00:16:01.824 00:16:01.824 Latency(us) 00:16:01.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.824 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:01.824 Verification LBA range: start 0x0 length 0x4000 00:16:01.824 NVMe0n1 : 10.04 11630.66 45.43 0.00 0.00 87761.47 2689.71 69905.07 00:16:01.824 =================================================================================================================== 00:16:01.824 Total : 11630.66 45.43 0.00 0.00 87761.47 2689.71 69905.07 00:16:01.824 0 00:16:01.824 00:27:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1043837 00:16:01.824 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1043837 ']' 00:16:01.824 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1043837 00:16:01.825 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:01.825 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.825 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043837 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043837' 00:16:02.085 killing process with pid 1043837 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1043837 00:16:02.085 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.085 00:16:02.085 Latency(us) 00:16:02.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.085 
=================================================================================================================== 00:16:02.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1043837 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.085 rmmod nvme_tcp 00:16:02.085 rmmod nvme_fabrics 00:16:02.085 rmmod nvme_keyring 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1043501 ']' 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1043501 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1043501 ']' 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1043501 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.085 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043501 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043501' 00:16:02.344 killing process with pid 1043501 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1043501 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1043501 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.344 00:27:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.888 00:27:17 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.888 00:16:04.888 real 0m23.030s 00:16:04.888 user 0m25.983s 00:16:04.888 sys 0m7.153s 00:16:04.888 00:27:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.888 00:27:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:04.888 ************************************ 00:16:04.888 END TEST nvmf_queue_depth 00:16:04.888 ************************************ 00:16:04.888 00:27:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.888 00:27:17 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.888 00:27:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.888 00:27:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.888 00:27:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.888 ************************************ 00:16:04.888 START TEST nvmf_target_multipath 00:16:04.888 ************************************ 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.888 * Looking for test storage... 00:16:04.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.888 00:27:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:13.026 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:13.026 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:13.026 Found net devices under 0000:31:00.0: cvl_0_0 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:13.026 Found net devices under 0000:31:00.1: cvl_0_1 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:16:13.026 00:16:13.026 --- 10.0.0.2 ping statistics --- 00:16:13.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.026 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:16:13.026 00:16:13.026 --- 10.0.0.1 ping statistics --- 00:16:13.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.026 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:13.026 only one NIC for nvmf test 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.026 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.027 rmmod nvme_tcp 00:16:13.027 rmmod nvme_fabrics 00:16:13.027 rmmod nvme_keyring 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.027 00:27:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.938 00:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.198 00:27:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.198 00:16:15.198 real 0m10.575s 00:16:15.198 user 0m2.303s 00:16:15.198 sys 0m6.179s 00:16:15.199 00:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.199 00:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:15.199 ************************************ 00:16:15.199 END TEST nvmf_target_multipath 00:16:15.199 ************************************ 00:16:15.199 00:27:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.199 00:27:28 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:15.199 00:27:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.199 00:27:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.199 00:27:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.199 ************************************ 00:16:15.199 START TEST nvmf_zcopy 00:16:15.199 ************************************ 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:15.199 * Looking for test storage... 
00:16:15.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.199 00:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:23.337 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.337 
00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:23.337 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.337 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:23.338 Found net devices under 0000:31:00.0: cvl_0_0 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:23.338 Found net devices under 0000:31:00.1: cvl_0_1 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.338 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:16:23.599 00:16:23.599 --- 10.0.0.2 ping statistics --- 00:16:23.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.599 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:16:23.599 00:16:23.599 --- 10.0.0.1 ping statistics --- 00:16:23.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.599 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.599 00:27:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1055725 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1055725 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1055725 ']' 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.599 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.599 [2024-07-16 00:27:37.093444] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:23.600 [2024-07-16 00:27:37.093509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.600 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.600 [2024-07-16 00:27:37.188693] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.861 [2024-07-16 00:27:37.281810] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.861 [2024-07-16 00:27:37.281870] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:23.861 [2024-07-16 00:27:37.281878] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.861 [2024-07-16 00:27:37.281885] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.861 [2024-07-16 00:27:37.281891] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.861 [2024-07-16 00:27:37.281929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:24.432 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 [2024-07-16 00:27:37.917148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 [2024-07-16 00:27:37.941413] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 malloc0 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 
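At this point the trace has brought up the zero-copy TCP target: the transport is created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 is exposed on 10.0.0.2:4420 (plus the discovery service), and a malloc bdev is created to back it; the namespace itself is attached in the next step. Condensed from the xtrace above, the sequence reduces to roughly the following rpc.py calls (a sketch; rpc_cmd in the harness is assumed to wrap scripts/rpc.py against /var/tmp/spdk.sock, which this log implies but does not show directly):

  # TCP transport with zero-copy enabled and in-capsule data size 0 (flags copied from the trace)
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem with serial SPDK00000000000001, any host allowed, up to 10 namespaces
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with 4096-byte blocks to serve as the namespace backing store
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0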
00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:24.433 { 00:16:24.433 "params": { 00:16:24.433 "name": "Nvme$subsystem", 00:16:24.433 "trtype": "$TEST_TRANSPORT", 00:16:24.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.433 "adrfam": "ipv4", 00:16:24.433 "trsvcid": "$NVMF_PORT", 00:16:24.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.433 "hdgst": ${hdgst:-false}, 00:16:24.433 "ddgst": ${ddgst:-false} 00:16:24.433 }, 00:16:24.433 "method": "bdev_nvme_attach_controller" 00:16:24.433 } 00:16:24.433 EOF 00:16:24.433 )") 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:24.433 00:27:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:24.433 00:27:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:24.433 00:27:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:24.433 "params": { 00:16:24.433 "name": "Nvme1", 00:16:24.433 "trtype": "tcp", 00:16:24.433 "traddr": "10.0.0.2", 00:16:24.433 "adrfam": "ipv4", 00:16:24.433 "trsvcid": "4420", 00:16:24.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.433 "hdgst": false, 00:16:24.433 "ddgst": false 00:16:24.433 }, 00:16:24.433 "method": "bdev_nvme_attach_controller" 00:16:24.433 }' 00:16:24.433 [2024-07-16 00:27:38.039481] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:24.433 [2024-07-16 00:27:38.039543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056009 ] 00:16:24.693 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.693 [2024-07-16 00:27:38.112504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.693 [2024-07-16 00:27:38.186259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.953 Running I/O for 10 seconds... 
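The bdevperf invocation just launched reads its configuration from /dev/fd/62, i.e. the gen_nvmf_target_json output printed above is handed in through a file descriptor rather than a file on disk. Outside the harness the equivalent call would look roughly like the sketch below (an assumption that zcopy.sh uses process substitution and that gen_nvmf_target_json emits the bdev_nvme_attach_controller config shown in the trace):

  # 10-second verify workload, queue depth 128, 8 KiB I/O; config delivered via process substitution
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192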
00:16:34.951 00:16:34.951 Latency(us) 00:16:34.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:34.951 Verification LBA range: start 0x0 length 0x1000 00:16:34.951 Nvme1n1 : 10.01 9429.65 73.67 0.00 0.00 13523.21 2362.03 28617.39 00:16:34.951 =================================================================================================================== 00:16:34.951 Total : 9429.65 73.67 0.00 0.00 13523.21 2362.03 28617.39 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1058014 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:35.211 { 00:16:35.211 "params": { 00:16:35.211 "name": "Nvme$subsystem", 00:16:35.211 "trtype": "$TEST_TRANSPORT", 00:16:35.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.211 "adrfam": "ipv4", 00:16:35.211 "trsvcid": "$NVMF_PORT", 00:16:35.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.211 "hdgst": ${hdgst:-false}, 00:16:35.211 "ddgst": ${ddgst:-false} 00:16:35.211 }, 00:16:35.211 "method": "bdev_nvme_attach_controller" 00:16:35.211 } 00:16:35.211 EOF 00:16:35.211 )") 00:16:35.211 [2024-07-16 00:27:48.622117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.622149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
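The paired '[2024-07-16 ...] subsystem.c: ... Requested NSID 1 already in use' / 'nvmf_rpc.c: ... Unable to add namespace' errors that start here and repeat while the 5-second randrw job is set up and run are the target rejecting repeated namespace-add RPCs for NSID 1, which is still attached to cnode1 as malloc0. From the trace these appear to come from the test re-issuing the namespace add while I/O is in flight, each rejected attempt corresponding to a call like the sketch below (a reconstruction of the add_ns already traced earlier, not something printed verbatim at this point in the log):

  # rejected with 'Requested NSID 1 already in use' because malloc0 is still attached as NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1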
00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:35.211 00:27:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:35.211 "params": { 00:16:35.211 "name": "Nvme1", 00:16:35.211 "trtype": "tcp", 00:16:35.211 "traddr": "10.0.0.2", 00:16:35.211 "adrfam": "ipv4", 00:16:35.211 "trsvcid": "4420", 00:16:35.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.211 "hdgst": false, 00:16:35.211 "ddgst": false 00:16:35.211 }, 00:16:35.211 "method": "bdev_nvme_attach_controller" 00:16:35.211 }' 00:16:35.211 [2024-07-16 00:27:48.634105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.634115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.646132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.646141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.658162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.658170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.665630] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:35.211 [2024-07-16 00:27:48.665679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058014 ] 00:16:35.211 [2024-07-16 00:27:48.670194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.670202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.682224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.682235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.211 [2024-07-16 00:27:48.694259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.694266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.706288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.706296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.718320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.718327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.729897] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.211 [2024-07-16 00:27:48.730350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.730357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.742383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.742391] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.754414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.754423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.766446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.766458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.778474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.778489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.790505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.790514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.793887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.211 [2024-07-16 00:27:48.802534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.802542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.814572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.814586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.826601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.826610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.211 [2024-07-16 00:27:48.838628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.211 [2024-07-16 00:27:48.838636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.850659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.850667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.862690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.862697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.874736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.874754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.886755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.886766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.898787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.898797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.910818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.910828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.922849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.922857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.934883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.934898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 Running I/O for 5 seconds... 00:16:35.473 [2024-07-16 00:27:48.946919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.946929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.963148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.963166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.976153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.976171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:48.988761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:48.988779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.001322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.001343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.013997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.014013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.026281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.026298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.038835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.038851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.052262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.052279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.064892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.064908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.078073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.078089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.473 [2024-07-16 00:27:49.091809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.473 [2024-07-16 00:27:49.091825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.732 [2024-07-16 00:27:49.105340] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.732 [2024-07-16 00:27:49.105357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.732 [2024-07-16 00:27:49.118442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.732 [2024-07-16 00:27:49.118458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.732 [2024-07-16 00:27:49.131769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.732 [2024-07-16 00:27:49.131786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.732 [2024-07-16 00:27:49.139676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.732 [2024-07-16 00:27:49.139692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.732 [2024-07-16 00:27:49.148253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.732 [2024-07-16 00:27:49.148269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.157441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.157456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.165985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.166001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.175072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.175087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.183434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.183449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.192440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.192456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.201502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.201517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.209844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.209866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.218595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.218610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.227331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.227346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.235995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.236011] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.244949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.244964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.253341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.253357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.261872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.261887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.270957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.270974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.279469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.279485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.288123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.288139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.297143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.297159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.306155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.306170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.314949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.314964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.323399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.323414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.332260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.332276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.341246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.341261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.350127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.350142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.733 [2024-07-16 00:27:49.358805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.733 [2024-07-16 00:27:49.358821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.367589] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.367605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.376533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.376552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.384930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.384946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.393270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.393286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.402080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.402096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.410939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.410955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.419225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.419245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.428058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.428074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.436871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.436887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.445564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.445579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.453882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.453898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.462660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.462676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.471393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.471409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.480555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.480570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.488934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.488950] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.497226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.497248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.505722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.505738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.514168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.514183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.522632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.522649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.531360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.531375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.540121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.540137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.549002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.549018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.557464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.993 [2024-07-16 00:27:49.557480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.993 [2024-07-16 00:27:49.566145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.566161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.574785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.574800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.583187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.583203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.591827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.591843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.600536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.600551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.609591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.609607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.994 [2024-07-16 00:27:49.617999] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.994 [2024-07-16 00:27:49.618014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.626998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.627014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.635988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.636003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.644835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.644850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.653454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.653469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.662176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.662190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.670598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.670613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.678950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.678964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.688010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.688025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.696217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.696236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.705029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.705044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.713357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.713372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.722228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.722246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.731285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.731301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.254 [2024-07-16 00:27:49.740153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.254 [2024-07-16 00:27:49.740169] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:36.254 [2024-07-16 00:27:49.748782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:36.254 [2024-07-16 00:27:49.748797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2058 / nvmf_rpc.c:1553 error pair repeats for every subsequent add-namespace attempt, each attempt several milliseconds apart (elapsed markers 00:16:36.254 through 00:16:38.865) ...]
00:16:38.865 [2024-07-16 00:27:52.258663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.267531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.267546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.276492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.276507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.284843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.284858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.293981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.293996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.302187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.302202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.311097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.311111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.320300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.320315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.328430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.328444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.337591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.337606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.346369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.346384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.355293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.355308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.363899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.363914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.371568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.865 [2024-07-16 00:27:52.371584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.865 [2024-07-16 00:27:52.380782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.380798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.389096] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.389111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.398032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.398047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.407065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.407081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.415564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.415579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.424649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.424665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.432869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.432884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.441163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.441179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.449612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.449628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.458176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.458191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.466551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.466567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.475574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.475589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.484053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.484069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.866 [2024-07-16 00:27:52.492822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.866 [2024-07-16 00:27:52.492837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.501625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.501641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.510607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.510623] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.519695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.519710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.528659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.528674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.537158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.537173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.546241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.546256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.554519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.554534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.563176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.563192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.571098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.571114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.580141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.580157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.588455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.588471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.597183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.597198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.605360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.605376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.614038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.614054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.622599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.622615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.631162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.631178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.639293] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.639308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.648153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.648169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.657202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.657217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.665753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.665770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.674025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.674040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.682164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.682179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.695362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.695377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.703136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.703151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.711656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.711671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.720248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.720264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.729141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.729157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.738198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.738214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.746794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.746809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.127 [2024-07-16 00:27:52.755380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.127 [2024-07-16 00:27:52.755396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.764102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.764118] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.773065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.773081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.781802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.781817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.790227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.790247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.799157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.799173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.807855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.807870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.816365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.816385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.825031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.825046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.833430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.833446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.842103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.842119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.850280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.850296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.859006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.859021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.387 [2024-07-16 00:27:52.867727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.387 [2024-07-16 00:27:52.867743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.876169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.876184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.884947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.884963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.893696] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.893712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.902275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.902290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.911034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.911049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.919837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.919853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.928522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.928538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.937348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.937364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.945709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.945724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.953535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.953551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.962902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.962917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.971244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.971260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.979859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.979878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.988642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.988657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:52.996800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:52.996816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:53.005561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:53.005577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.388 [2024-07-16 00:27:53.014584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.388 [2024-07-16 00:27:53.014599] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.023719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.023735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.032254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.032270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.040691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.040707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.049125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.049140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.058150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.058166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.066049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.066064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.074756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.074772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.082980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.082996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.091108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.091122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.099991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.100005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.107521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.107536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.116765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.116779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.125461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.125475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.134489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.134504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.143274] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.143293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.152124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.152139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.160501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.160516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.168835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.168850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.177619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.177634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.186470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.186485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.195095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.195110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.203496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.203512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.211742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.211757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.220513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.220528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.228747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.228761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.237609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.237624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.245823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.245838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.255013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.255028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.263424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.263439] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.648 [2024-07-16 00:27:53.272029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.648 [2024-07-16 00:27:53.272043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.280896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.280912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.289484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.289498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.298030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.298045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.306870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.306887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.315464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.315479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.324551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.324566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.333581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.333596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.342485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.342500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.351080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.351095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.359774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.359790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.368111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.368126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.376736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.376751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.385456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.385471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.393974] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.393989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.402975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.402990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.411293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.411309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.420522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.420537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.428633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.428648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.437800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.437815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.445966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.445982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.454904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.454919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.463439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.463453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.472552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.472567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.480627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.480641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.489690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.489704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.497955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.497969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.506804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.506819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.515743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.515758] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.524206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.524221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.909 [2024-07-16 00:27:53.532910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.909 [2024-07-16 00:27:53.532925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.541420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.541435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.550152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.550167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.558646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.558661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.567806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.567821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.576203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.576218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.585355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.585370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.593709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.593724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.602468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.602483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.611039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.611054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.620198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.620212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.629219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.629237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.637805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.637820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.646573] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.646588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.655324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.655338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.664103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.664118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.673053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.673068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.681500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.681515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.690474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.690489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.699339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.699354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.708347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.708363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.717209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.717224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.725301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.725316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.169 [2024-07-16 00:27:53.734225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.169 [2024-07-16 00:27:53.734244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.743375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.743391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.751844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.751860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.760871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.760886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.769386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.769401] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.777950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.777966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.787201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.787215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.170 [2024-07-16 00:27:53.795467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.170 [2024-07-16 00:27:53.795482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.804564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.804580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.812656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.812671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.821685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.821700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.829950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.829965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.838328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.838343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.847352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.847368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.856415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.856431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.864584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.864598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.873689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.873703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.882083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.882098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.890766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.890781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.899346] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.899361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.908318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.908333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.917059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.917074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.925636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.925650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.934658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.934673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.943064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.943079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.952211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.952226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.957955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.957969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 00:16:40.430 Latency(us) 00:16:40.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.430 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:40.430 Nvme1n1 : 5.01 19246.61 150.36 0.00 0.00 6644.13 2416.64 18786.99 00:16:40.430 =================================================================================================================== 00:16:40.430 Total : 19246.61 150.36 0.00 0.00 6644.13 2416.64 18786.99 00:16:40.430 [2024-07-16 00:27:53.965971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.965983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.973993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.974005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.982019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.982032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.990039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.990050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:53.998055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:53.998065] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.006073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.006082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.014093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.014101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.022114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.022122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.030134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.030141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.038154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.038162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.046175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.046183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.430 [2024-07-16 00:27:54.054196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.430 [2024-07-16 00:27:54.054204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.690 [2024-07-16 00:27:54.062217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.690 [2024-07-16 00:27:54.062225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.690 [2024-07-16 00:27:54.070241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.690 [2024-07-16 00:27:54.070251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.690 [2024-07-16 00:27:54.078259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.690 [2024-07-16 00:27:54.078267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.690 [2024-07-16 00:27:54.086279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.690 [2024-07-16 00:27:54.086295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1058014) - No such process 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1058014 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:40.690 00:27:54 
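
Note on the trace above: the long run of paired messages (subsystem.c "Requested NSID 1 already in use" followed by nvmf_rpc.c "Unable to add namespace") is the target rejecting repeated attempts to add a namespace with NSID 1 while that NSID is still attached; every rejected RPC logs one such pair. The latency summary printed a few lines above is internally consistent with its own job line: 19246.61 IOPS x 8192 B per I/O ≈ 157.67 MB/s ≈ 150.36 MiB/s, matching the MiB/s column. Once the flood stops, the trace removes the namespace and rebuilds it on top of a delay bdev. A rough hand-run equivalent of that RPC sequence is sketched below; it assumes that rpc_cmd in the trace forwards to scripts/rpc.py and that the target and its malloc0 bdev from this run are still up, and it uses only commands and arguments already visible in the trace.

    # Sketch, not a verbatim excerpt of zcopy.sh: the namespace re-plumbing traced above.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap the existing malloc0 bdev in a delay bdev; the four latency arguments are the values from the trace.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Expose the delayed bdev as NSID 1 of the test subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Issuing the same add again while NSID 1 is still attached reproduces the error pair seen above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
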
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.690 delay0 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.690 00:27:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:40.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.690 [2024-07-16 00:27:54.262450] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:47.318 Initializing NVMe Controllers 00:16:47.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:47.318 Initialization complete. Launching workers. 00:16:47.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 945 00:16:47.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1232, failed to submit 33 00:16:47.318 success 1088, unsuccess 144, failed 0 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.318 rmmod nvme_tcp 00:16:47.318 rmmod nvme_fabrics 00:16:47.318 rmmod nvme_keyring 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1055725 ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1055725 ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.318 
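
The abort example run traced above reports numbers that are internally consistent: 1088 successful + 144 unsuccessful + 0 failed aborts account for all 1232 aborts it says it submitted. For reference, the same invocation issued by hand from this workspace would look like the sketch below; the binary path and every flag value are copied from the command recorded in the xtrace, and the transport string names the same TCP listener (10.0.0.2:4420) and NSID 1 that the tool reports attaching to.

    # Sketch: re-running the abort example by hand against the target from this run.
    # All values are taken from the command recorded in the xtrace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
        -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
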
00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055725' 00:16:47.318 killing process with pid 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1055725 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.318 00:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.269 00:28:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.269 00:16:49.269 real 0m34.163s 00:16:49.269 user 0m45.384s 00:16:49.269 sys 0m11.052s 00:16:49.269 00:28:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.269 00:28:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:49.269 ************************************ 00:16:49.269 END TEST nvmf_zcopy 00:16:49.269 ************************************ 00:16:49.269 00:28:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.269 00:28:02 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:49.269 00:28:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.269 00:28:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.269 00:28:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.530 ************************************ 00:16:49.530 START TEST nvmf_nmic 00:16:49.530 ************************************ 00:16:49.530 00:28:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:49.530 * Looking for test storage... 
00:16:49.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.530 00:28:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.531 00:28:03 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.531 00:28:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.663 
00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:57.663 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.663 00:28:10 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:57.663 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:57.663 Found net devices under 0000:31:00.0: cvl_0_0 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:57.663 Found net devices under 0000:31:00.1: cvl_0_1 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.663 00:28:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:16:57.663 00:16:57.663 --- 10.0.0.2 ping statistics --- 00:16:57.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.663 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:16:57.663 00:16:57.663 --- 10.0.0.1 ping statistics --- 00:16:57.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.663 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1065041 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1065041 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1065041 ']' 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.663 00:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:57.663 [2024-07-16 00:28:11.267347] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:57.663 [2024-07-16 00:28:11.267442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.922 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.922 [2024-07-16 00:28:11.344209] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.922 [2024-07-16 00:28:11.410640] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.922 [2024-07-16 00:28:11.410678] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.922 [2024-07-16 00:28:11.410685] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.922 [2024-07-16 00:28:11.410692] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.922 [2024-07-16 00:28:11.410697] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.922 [2024-07-16 00:28:11.410834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.922 [2024-07-16 00:28:11.410852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.922 [2024-07-16 00:28:11.410986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.922 [2024-07-16 00:28:11.410987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.492 [2024-07-16 00:28:12.079884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.492 Malloc0 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.492 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 [2024-07-16 00:28:12.139278] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:58.753 test case1: single bdev can't be used in multiple subsystems 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 [2024-07-16 00:28:12.175177] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:58.753 [2024-07-16 00:28:12.175197] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:58.753 [2024-07-16 00:28:12.175204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.753 request: 00:16:58.753 { 00:16:58.753 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:58.753 "namespace": { 00:16:58.753 "bdev_name": "Malloc0", 00:16:58.753 "no_auto_visible": false 00:16:58.753 }, 00:16:58.753 "method": "nvmf_subsystem_add_ns", 00:16:58.753 "req_id": 1 00:16:58.753 } 00:16:58.753 Got JSON-RPC error response 00:16:58.753 response: 00:16:58.753 { 00:16:58.753 "code": -32602, 00:16:58.753 "message": "Invalid parameters" 00:16:58.753 } 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:58.753 Adding namespace failed - expected result. 
00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:58.753 test case2: host connect to nvmf target in multiple paths 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:58.753 [2024-07-16 00:28:12.187314] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.753 00:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.136 00:28:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:02.046 00:28:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.046 00:28:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:02.046 00:28:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.046 00:28:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:02.046 00:28:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:03.959 00:28:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:03.959 [global] 00:17:03.959 thread=1 00:17:03.959 invalidate=1 00:17:03.959 rw=write 00:17:03.959 time_based=1 00:17:03.959 runtime=1 00:17:03.959 ioengine=libaio 00:17:03.959 direct=1 00:17:03.959 bs=4096 00:17:03.959 iodepth=1 00:17:03.959 norandommap=0 00:17:03.959 numjobs=1 00:17:03.959 00:17:03.959 verify_dump=1 00:17:03.959 verify_backlog=512 00:17:03.959 verify_state_save=0 00:17:03.959 do_verify=1 00:17:03.959 verify=crc32c-intel 00:17:03.959 [job0] 00:17:03.959 filename=/dev/nvme0n1 00:17:03.959 Could not set queue depth (nvme0n1) 00:17:04.220 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.220 fio-3.35 00:17:04.220 Starting 1 thread 00:17:05.179 00:17:05.179 job0: (groupid=0, jobs=1): err= 0: pid=1066585: Tue Jul 16 00:28:18 2024 00:17:05.179 read: IOPS=14, BW=59.6KiB/s (61.1kB/s)(60.0KiB/1006msec) 00:17:05.179 slat (nsec): min=10661, max=27449, avg=25618.40, stdev=4153.73 
00:17:05.179 clat (usec): min=41913, max=42974, avg=42175.55, stdev=416.47 00:17:05.179 lat (usec): min=41940, max=43001, avg=42201.17, stdev=416.53 00:17:05.179 clat percentiles (usec): 00:17:05.179 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:05.179 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:05.179 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:05.179 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:05.179 | 99.99th=[42730] 00:17:05.179 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:17:05.179 slat (usec): min=9, max=24937, avg=77.55, stdev=1100.85 00:17:05.179 clat (usec): min=390, max=842, avg=638.75, stdev=91.97 00:17:05.179 lat (usec): min=400, max=25705, avg=716.30, stdev=1110.89 00:17:05.179 clat percentiles (usec): 00:17:05.179 | 1.00th=[ 424], 5.00th=[ 449], 10.00th=[ 523], 20.00th=[ 545], 00:17:05.179 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 644], 60.00th=[ 668], 00:17:05.179 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 742], 95.00th=[ 766], 00:17:05.179 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 840], 99.95th=[ 840], 00:17:05.179 | 99.99th=[ 840] 00:17:05.179 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.179 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.179 lat (usec) : 500=7.59%, 750=81.40%, 1000=8.16% 00:17:05.179 lat (msec) : 50=2.85% 00:17:05.179 cpu : usr=1.19%, sys=1.69%, ctx=531, majf=0, minf=1 00:17:05.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.179 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.179 00:17:05.179 Run status group 0 (all jobs): 00:17:05.179 READ: bw=59.6KiB/s (61.1kB/s), 59.6KiB/s-59.6KiB/s (61.1kB/s-61.1kB/s), io=60.0KiB (61.4kB), run=1006-1006msec 00:17:05.179 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec 00:17:05.179 00:17:05.179 Disk stats (read/write): 00:17:05.179 nvme0n1: ios=37/512, merge=0/0, ticks=1485/278, in_queue=1763, util=98.50% 00:17:05.179 00:28:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.440 00:28:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.440 rmmod nvme_tcp 00:17:05.440 rmmod nvme_fabrics 00:17:05.440 rmmod nvme_keyring 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1065041 ']' 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1065041 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1065041 ']' 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1065041 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.440 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1065041 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1065041' 00:17:05.700 killing process with pid 1065041 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1065041 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1065041 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.700 00:28:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.247 00:28:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.247 00:17:08.247 real 0m18.429s 00:17:08.247 user 0m46.499s 00:17:08.247 sys 0m6.729s 00:17:08.247 00:28:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.247 00:28:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:08.247 ************************************ 00:17:08.247 END TEST nvmf_nmic 00:17:08.247 ************************************ 00:17:08.247 00:28:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.247 00:28:21 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:08.247 00:28:21 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.247 00:28:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.247 00:28:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.247 ************************************ 00:17:08.247 START TEST nvmf_fio_target 00:17:08.247 ************************************ 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:08.247 * Looking for test storage... 00:17:08.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.247 00:28:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.248 00:28:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:16.410 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.411 00:28:29 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:16.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:16.411 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.411 00:28:29 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:16.411 Found net devices under 0000:31:00.0: cvl_0_0 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:16.411 Found net devices under 0000:31:00.1: cvl_0_1 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:17:16.411 00:17:16.411 --- 10.0.0.2 ping statistics --- 00:17:16.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.411 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:17:16.411 00:17:16.411 --- 10.0.0.1 ping statistics --- 00:17:16.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.411 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1071536 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1071536 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1071536 ']' 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
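For reference, the nvmf_tcp_init sequence traced above reduces to the plain iproute2/iptables calls below. This is only a readability sketch of what nvmf/common.sh has already done at this point, not an extra step in the run: the interface names (cvl_0_0, cvl_0_1), the namespace name, the 10.0.0.1/10.0.0.2 addresses and port 4420 are taken from the trace, and the commands assume root on a host exposing the same cvl_* netdevs.

ip -4 addr flush cvl_0_0                                        # start from clean addresses
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                    # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let the NVMe/TCP listener port through
ping -c 1 10.0.0.2                                              # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace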
00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.411 00:28:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.411 [2024-07-16 00:28:29.960313] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:16.411 [2024-07-16 00:28:29.960370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.411 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.671 [2024-07-16 00:28:30.043207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.672 [2024-07-16 00:28:30.114051] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.672 [2024-07-16 00:28:30.114094] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.672 [2024-07-16 00:28:30.114102] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.672 [2024-07-16 00:28:30.114109] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.672 [2024-07-16 00:28:30.114114] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.672 [2024-07-16 00:28:30.114277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.672 [2024-07-16 00:28:30.114332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.672 [2024-07-16 00:28:30.114637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.672 [2024-07-16 00:28:30.114637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.241 00:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:17.501 [2024-07-16 00:28:30.919301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.501 00:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.761 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:17.761 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.762 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:17.762 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.021 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
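The target/fio.sh setup traced above and continued in the lines that follow boils down to one rpc.py sequence: create the TCP transport, carve out malloc bdevs (two used directly, two striped into raid0, three concatenated into concat0), expose all four as namespaces of nqn.2016-06.io.spdk:cnode1 behind a 10.0.0.2:4420 listener, and connect with the kernel initiator. The sketch below only restates those commands for readability; every RPC name and argument is copied from the trace (the live output continues below), with `rpc.py` standing in for the full scripts/rpc.py path and the --hostnqn/--hostid arguments of the nvme connect call elided.

rpc.py nvmf_create_transport -t tcp -o -u 8192                  # transport flags exactly as in the trace
rpc.py bdev_malloc_create 64 512                                # -> Malloc0 (size 64 MB, 512-byte blocks)
rpc.py bdev_malloc_create 64 512                                # -> Malloc1
rpc.py bdev_malloc_create 64 512                                # -> Malloc2 (raid0 member)
rpc.py bdev_malloc_create 64 512                                # -> Malloc3 (raid0 member)
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_malloc_create 64 512                                # -> Malloc4 (concat member)
rpc.py bdev_malloc_create 64 512                                # -> Malloc5 (concat member)
rpc.py bdev_malloc_create 64 512                                # -> Malloc6 (concat member)
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # plus --hostnqn/--hostid as shown in the trace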
00:17:18.021 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.282 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:18.282 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:18.282 00:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.543 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:18.543 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.803 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:18.803 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.803 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:18.803 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:19.062 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:19.062 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:19.062 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.324 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:19.324 00:28:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.585 00:28:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.585 [2024-07-16 00:28:33.177297] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.585 00:28:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:19.844 00:28:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:20.104 00:28:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:21.490 00:28:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:23.402 00:28:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:23.661 [global] 00:17:23.661 thread=1 00:17:23.661 invalidate=1 00:17:23.661 rw=write 00:17:23.661 time_based=1 00:17:23.661 runtime=1 00:17:23.661 ioengine=libaio 00:17:23.661 direct=1 00:17:23.661 bs=4096 00:17:23.661 iodepth=1 00:17:23.661 norandommap=0 00:17:23.661 numjobs=1 00:17:23.661 00:17:23.661 verify_dump=1 00:17:23.661 verify_backlog=512 00:17:23.661 verify_state_save=0 00:17:23.661 do_verify=1 00:17:23.661 verify=crc32c-intel 00:17:23.661 [job0] 00:17:23.661 filename=/dev/nvme0n1 00:17:23.661 [job1] 00:17:23.661 filename=/dev/nvme0n2 00:17:23.661 [job2] 00:17:23.661 filename=/dev/nvme0n3 00:17:23.661 [job3] 00:17:23.661 filename=/dev/nvme0n4 00:17:23.661 Could not set queue depth (nvme0n1) 00:17:23.661 Could not set queue depth (nvme0n2) 00:17:23.661 Could not set queue depth (nvme0n3) 00:17:23.661 Could not set queue depth (nvme0n4) 00:17:23.921 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.921 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.921 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.921 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.921 fio-3.35 00:17:23.921 Starting 4 threads 00:17:25.306 00:17:25.306 job0: (groupid=0, jobs=1): err= 0: pid=1073184: Tue Jul 16 00:28:38 2024 00:17:25.306 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:25.306 slat (nsec): min=7222, max=45091, avg=27103.23, stdev=3720.35 00:17:25.306 clat (usec): min=826, max=1331, avg=1105.51, stdev=81.12 00:17:25.306 lat (usec): min=853, max=1359, avg=1132.61, stdev=81.71 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 873], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1045], 00:17:25.306 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:17:25.306 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:17:25.306 | 99.00th=[ 1287], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:17:25.306 | 99.99th=[ 1336] 00:17:25.306 write: IOPS=580, BW=2322KiB/s (2377kB/s)(2324KiB/1001msec); 0 zone resets 00:17:25.306 slat (nsec): min=9340, max=61616, avg=31071.23, stdev=10139.35 00:17:25.306 
clat (usec): min=353, max=976, avg=676.92, stdev=107.16 00:17:25.306 lat (usec): min=364, max=1011, avg=708.00, stdev=111.37 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 416], 5.00th=[ 469], 10.00th=[ 529], 20.00th=[ 586], 00:17:25.306 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 676], 60.00th=[ 709], 00:17:25.306 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 840], 00:17:25.306 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:17:25.306 | 99.99th=[ 979] 00:17:25.306 bw ( KiB/s): min= 4096, max= 4096, per=41.67%, avg=4096.00, stdev= 0.00, samples=1 00:17:25.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:25.306 lat (usec) : 500=3.39%, 750=36.05%, 1000=18.12% 00:17:25.306 lat (msec) : 2=42.45% 00:17:25.306 cpu : usr=1.20%, sys=5.40%, ctx=1094, majf=0, minf=1 00:17:25.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 issued rwts: total=512,581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.306 job1: (groupid=0, jobs=1): err= 0: pid=1073185: Tue Jul 16 00:28:38 2024 00:17:25.306 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:25.306 slat (nsec): min=6807, max=59950, avg=25893.69, stdev=5284.84 00:17:25.306 clat (usec): min=607, max=1053, avg=867.16, stdev=57.49 00:17:25.306 lat (usec): min=633, max=1079, avg=893.06, stdev=57.19 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 668], 5.00th=[ 750], 10.00th=[ 799], 20.00th=[ 840], 00:17:25.306 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[ 873], 60.00th=[ 881], 00:17:25.306 | 70.00th=[ 898], 80.00th=[ 906], 90.00th=[ 922], 95.00th=[ 938], 00:17:25.306 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:17:25.306 | 99.99th=[ 1057] 00:17:25.306 write: IOPS=945, BW=3780KiB/s (3871kB/s)(3784KiB/1001msec); 0 zone resets 00:17:25.306 slat (usec): min=9, max=7481, avg=36.80, stdev=246.09 00:17:25.306 clat (usec): min=238, max=945, avg=526.32, stdev=85.32 00:17:25.306 lat (usec): min=250, max=7820, avg=563.13, stdev=255.57 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 424], 20.00th=[ 449], 00:17:25.306 | 30.00th=[ 482], 40.00th=[ 519], 50.00th=[ 537], 60.00th=[ 545], 00:17:25.306 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 644], 00:17:25.306 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 947], 99.95th=[ 947], 00:17:25.306 | 99.99th=[ 947] 00:17:25.306 bw ( KiB/s): min= 4096, max= 4096, per=41.67%, avg=4096.00, stdev= 0.00, samples=1 00:17:25.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:25.306 lat (usec) : 250=0.07%, 500=22.09%, 750=43.07%, 1000=34.64% 00:17:25.306 lat (msec) : 2=0.14% 00:17:25.306 cpu : usr=1.80%, sys=4.40%, ctx=1461, majf=0, minf=1 00:17:25.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 issued rwts: total=512,946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.306 job2: (groupid=0, jobs=1): err= 0: pid=1073186: Tue Jul 16 00:28:38 2024 00:17:25.306 read: IOPS=17, BW=69.4KiB/s 
(71.0kB/s)(72.0KiB/1038msec) 00:17:25.306 slat (nsec): min=9823, max=26045, avg=24834.28, stdev=3749.28 00:17:25.306 clat (usec): min=40964, max=42960, avg=41994.35, stdev=594.19 00:17:25.306 lat (usec): min=40990, max=42986, avg=42019.18, stdev=593.32 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:25.306 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:25.306 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:17:25.306 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:25.306 | 99.99th=[42730] 00:17:25.306 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:17:25.306 slat (nsec): min=9608, max=49007, avg=26444.53, stdev=10424.74 00:17:25.306 clat (usec): min=288, max=662, avg=517.77, stdev=63.35 00:17:25.306 lat (usec): min=319, max=689, avg=544.21, stdev=67.17 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 453], 00:17:25.306 | 30.00th=[ 494], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 545], 00:17:25.306 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 611], 00:17:25.306 | 99.00th=[ 644], 99.50th=[ 644], 99.90th=[ 660], 99.95th=[ 660], 00:17:25.306 | 99.99th=[ 660] 00:17:25.306 bw ( KiB/s): min= 4096, max= 4096, per=41.67%, avg=4096.00, stdev= 0.00, samples=1 00:17:25.306 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:25.306 lat (usec) : 500=30.00%, 750=66.60% 00:17:25.306 lat (msec) : 50=3.40% 00:17:25.306 cpu : usr=0.77%, sys=1.16%, ctx=530, majf=0, minf=1 00:17:25.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.306 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.306 job3: (groupid=0, jobs=1): err= 0: pid=1073187: Tue Jul 16 00:28:38 2024 00:17:25.306 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1021msec) 00:17:25.306 slat (nsec): min=25349, max=38296, avg=26402.00, stdev=3176.49 00:17:25.306 clat (usec): min=1280, max=43037, avg=39776.01, stdev=10275.95 00:17:25.306 lat (usec): min=1305, max=43063, avg=39802.41, stdev=10276.24 00:17:25.306 clat percentiles (usec): 00:17:25.306 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41681], 20.00th=[41681], 00:17:25.306 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:25.306 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:17:25.306 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:25.306 | 99.99th=[43254] 00:17:25.306 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:25.306 slat (nsec): min=10234, max=80718, avg=29910.08, stdev=10233.43 00:17:25.306 clat (usec): min=405, max=924, avg=713.98, stdev=96.57 00:17:25.307 lat (usec): min=438, max=957, avg=743.89, stdev=101.36 00:17:25.307 clat percentiles (usec): 00:17:25.307 | 1.00th=[ 449], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 652], 00:17:25.307 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 750], 00:17:25.307 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 816], 95.00th=[ 848], 00:17:25.307 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 922], 99.95th=[ 922], 00:17:25.307 | 99.99th=[ 922] 00:17:25.307 bw ( KiB/s): min= 
4096, max= 4096, per=41.67%, avg=4096.00, stdev= 0.00, samples=1 00:17:25.307 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:25.307 lat (usec) : 500=3.98%, 750=53.60%, 1000=39.39% 00:17:25.307 lat (msec) : 2=0.19%, 50=2.84% 00:17:25.307 cpu : usr=0.78%, sys=1.37%, ctx=530, majf=0, minf=1 00:17:25.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.307 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.307 00:17:25.307 Run status group 0 (all jobs): 00:17:25.307 READ: bw=4077KiB/s (4175kB/s), 62.7KiB/s-2046KiB/s (64.2kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1038msec 00:17:25.307 WRITE: bw=9830KiB/s (10.1MB/s), 1973KiB/s-3780KiB/s (2020kB/s-3871kB/s), io=9.96MiB (10.4MB), run=1001-1038msec 00:17:25.307 00:17:25.307 Disk stats (read/write): 00:17:25.307 nvme0n1: ios=463/512, merge=0/0, ticks=1333/294, in_queue=1627, util=96.59% 00:17:25.307 nvme0n2: ios=547/644, merge=0/0, ticks=679/334, in_queue=1013, util=98.57% 00:17:25.307 nvme0n3: ios=13/512, merge=0/0, ticks=547/256, in_queue=803, util=88.47% 00:17:25.307 nvme0n4: ios=34/512, merge=0/0, ticks=1357/344, in_queue=1701, util=96.79% 00:17:25.307 00:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:25.307 [global] 00:17:25.307 thread=1 00:17:25.307 invalidate=1 00:17:25.307 rw=randwrite 00:17:25.307 time_based=1 00:17:25.307 runtime=1 00:17:25.307 ioengine=libaio 00:17:25.307 direct=1 00:17:25.307 bs=4096 00:17:25.307 iodepth=1 00:17:25.307 norandommap=0 00:17:25.307 numjobs=1 00:17:25.307 00:17:25.307 verify_dump=1 00:17:25.307 verify_backlog=512 00:17:25.307 verify_state_save=0 00:17:25.307 do_verify=1 00:17:25.307 verify=crc32c-intel 00:17:25.307 [job0] 00:17:25.307 filename=/dev/nvme0n1 00:17:25.307 [job1] 00:17:25.307 filename=/dev/nvme0n2 00:17:25.307 [job2] 00:17:25.307 filename=/dev/nvme0n3 00:17:25.307 [job3] 00:17:25.307 filename=/dev/nvme0n4 00:17:25.307 Could not set queue depth (nvme0n1) 00:17:25.307 Could not set queue depth (nvme0n2) 00:17:25.307 Could not set queue depth (nvme0n3) 00:17:25.307 Could not set queue depth (nvme0n4) 00:17:25.567 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.567 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.567 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.567 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.567 fio-3.35 00:17:25.567 Starting 4 threads 00:17:26.947 00:17:26.947 job0: (groupid=0, jobs=1): err= 0: pid=1073713: Tue Jul 16 00:28:40 2024 00:17:26.947 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:17:26.947 slat (nsec): min=9845, max=25700, avg=24286.44, stdev=3609.68 00:17:26.947 clat (usec): min=41902, max=42988, avg=42054.85, stdev=273.17 00:17:26.947 lat (usec): min=41927, max=43013, avg=42079.14, stdev=271.52 00:17:26.947 clat percentiles (usec): 00:17:26.947 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:26.948 | 
30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:26.948 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:26.948 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:26.948 | 99.99th=[42730] 00:17:26.948 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:17:26.948 slat (nsec): min=8991, max=48125, avg=26051.54, stdev=9309.49 00:17:26.948 clat (usec): min=173, max=945, avg=521.82, stdev=68.81 00:17:26.948 lat (usec): min=185, max=976, avg=547.87, stdev=71.74 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 318], 5.00th=[ 408], 10.00th=[ 429], 20.00th=[ 453], 00:17:26.948 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 553], 00:17:26.948 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 603], 00:17:26.948 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 947], 99.95th=[ 947], 00:17:26.948 | 99.99th=[ 947] 00:17:26.948 bw ( KiB/s): min= 4087, max= 4087, per=51.99%, avg=4087.00, stdev= 0.00, samples=1 00:17:26.948 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:26.948 lat (usec) : 250=0.19%, 500=27.74%, 750=68.49%, 1000=0.19% 00:17:26.948 lat (msec) : 50=3.40% 00:17:26.948 cpu : usr=0.86%, sys=1.06%, ctx=530, majf=0, minf=1 00:17:26.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.948 job1: (groupid=0, jobs=1): err= 0: pid=1073714: Tue Jul 16 00:28:40 2024 00:17:26.948 read: IOPS=22, BW=89.8KiB/s (92.0kB/s)(92.0KiB/1024msec) 00:17:26.948 slat (nsec): min=24778, max=26450, avg=25535.65, stdev=319.98 00:17:26.948 clat (usec): min=592, max=41460, avg=37489.75, stdev=11609.13 00:17:26.948 lat (usec): min=617, max=41486, avg=37515.29, stdev=11609.31 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 594], 5.00th=[ 807], 10.00th=[40633], 20.00th=[40633], 00:17:26.948 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:26.948 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:26.948 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:26.948 | 99.99th=[41681] 00:17:26.948 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:26.948 slat (nsec): min=9302, max=50946, avg=24037.10, stdev=11011.73 00:17:26.948 clat (usec): min=111, max=659, avg=282.59, stdev=123.27 00:17:26.948 lat (usec): min=120, max=691, avg=306.62, stdev=130.11 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 143], 00:17:26.948 | 30.00th=[ 186], 40.00th=[ 249], 50.00th=[ 273], 60.00th=[ 297], 00:17:26.948 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 437], 95.00th=[ 506], 00:17:26.948 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 660], 00:17:26.948 | 99.99th=[ 660] 00:17:26.948 bw ( KiB/s): min= 4087, max= 4087, per=51.99%, avg=4087.00, stdev= 0.00, samples=1 00:17:26.948 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:26.948 lat (usec) : 250=39.25%, 500=51.40%, 750=5.23%, 1000=0.19% 00:17:26.948 lat (msec) : 50=3.93% 00:17:26.948 cpu : usr=0.68%, sys=1.17%, ctx=536, majf=0, minf=1 00:17:26.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.948 job2: (groupid=0, jobs=1): err= 0: pid=1073715: Tue Jul 16 00:28:40 2024 00:17:26.948 read: IOPS=15, BW=63.0KiB/s (64.5kB/s)(64.0KiB/1016msec) 00:17:26.948 slat (nsec): min=24565, max=25662, avg=24875.94, stdev=273.22 00:17:26.948 clat (usec): min=1029, max=42998, avg=39694.37, stdev=10320.00 00:17:26.948 lat (usec): min=1053, max=43023, avg=39719.24, stdev=10320.04 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41681], 20.00th=[41681], 00:17:26.948 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:26.948 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:17:26.948 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:26.948 | 99.99th=[43254] 00:17:26.948 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:26.948 slat (nsec): min=9165, max=65212, avg=27483.40, stdev=8790.45 00:17:26.948 clat (usec): min=358, max=965, avg=707.54, stdev=104.80 00:17:26.948 lat (usec): min=368, max=996, avg=735.03, stdev=109.08 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 441], 5.00th=[ 510], 10.00th=[ 578], 20.00th=[ 627], 00:17:26.948 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 734], 00:17:26.948 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 865], 00:17:26.948 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 963], 99.95th=[ 963], 00:17:26.948 | 99.99th=[ 963] 00:17:26.948 bw ( KiB/s): min= 4087, max= 4087, per=51.99%, avg=4087.00, stdev= 0.00, samples=1 00:17:26.948 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:26.948 lat (usec) : 500=4.73%, 750=58.90%, 1000=33.33% 00:17:26.948 lat (msec) : 2=0.19%, 50=2.84% 00:17:26.948 cpu : usr=0.69%, sys=1.38%, ctx=528, majf=0, minf=1 00:17:26.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.948 job3: (groupid=0, jobs=1): err= 0: pid=1073716: Tue Jul 16 00:28:40 2024 00:17:26.948 read: IOPS=14, BW=59.3KiB/s (60.7kB/s)(60.0KiB/1012msec) 00:17:26.948 slat (nsec): min=24681, max=26390, avg=25082.87, stdev=406.07 00:17:26.948 clat (usec): min=41893, max=42962, avg=42100.93, stdev=345.22 00:17:26.948 lat (usec): min=41920, max=42987, avg=42126.01, stdev=345.18 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:17:26.948 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:26.948 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:26.948 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:26.948 | 99.99th=[42730] 00:17:26.948 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:17:26.948 slat (nsec): min=9560, max=50715, avg=27206.35, stdev=9418.30 00:17:26.948 clat (usec): min=377, max=1005, 
avg=706.93, stdev=113.98 00:17:26.948 lat (usec): min=409, max=1039, avg=734.14, stdev=117.86 00:17:26.948 clat percentiles (usec): 00:17:26.948 | 1.00th=[ 429], 5.00th=[ 486], 10.00th=[ 553], 20.00th=[ 611], 00:17:26.948 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 750], 00:17:26.948 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 865], 00:17:26.948 | 99.00th=[ 922], 99.50th=[ 996], 99.90th=[ 1004], 99.95th=[ 1004], 00:17:26.948 | 99.99th=[ 1004] 00:17:26.948 bw ( KiB/s): min= 4087, max= 4087, per=51.99%, avg=4087.00, stdev= 0.00, samples=1 00:17:26.948 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:26.948 lat (usec) : 500=5.88%, 750=50.66%, 1000=40.42% 00:17:26.948 lat (msec) : 2=0.19%, 50=2.85% 00:17:26.948 cpu : usr=0.89%, sys=1.19%, ctx=529, majf=0, minf=1 00:17:26.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.948 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.948 00:17:26.948 Run status group 0 (all jobs): 00:17:26.948 READ: bw=276KiB/s (283kB/s), 59.3KiB/s-89.8KiB/s (60.7kB/s-92.0kB/s), io=288KiB (295kB), run=1012-1042msec 00:17:26.948 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2024KiB/s (2013kB/s-2072kB/s), io=8192KiB (8389kB), run=1012-1042msec 00:17:26.948 00:17:26.948 Disk stats (read/write): 00:17:26.948 nvme0n1: ios=63/512, merge=0/0, ticks=600/259, in_queue=859, util=86.97% 00:17:26.948 nvme0n2: ios=40/512, merge=0/0, ticks=1139/142, in_queue=1281, util=97.24% 00:17:26.948 nvme0n3: ios=11/512, merge=0/0, ticks=424/338, in_queue=762, util=88.29% 00:17:26.948 nvme0n4: ios=32/512, merge=0/0, ticks=1352/345, in_queue=1697, util=97.22% 00:17:26.948 00:28:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:26.948 [global] 00:17:26.948 thread=1 00:17:26.948 invalidate=1 00:17:26.948 rw=write 00:17:26.948 time_based=1 00:17:26.948 runtime=1 00:17:26.948 ioengine=libaio 00:17:26.948 direct=1 00:17:26.948 bs=4096 00:17:26.948 iodepth=128 00:17:26.948 norandommap=0 00:17:26.948 numjobs=1 00:17:26.948 00:17:26.948 verify_dump=1 00:17:26.948 verify_backlog=512 00:17:26.948 verify_state_save=0 00:17:26.948 do_verify=1 00:17:26.948 verify=crc32c-intel 00:17:26.948 [job0] 00:17:26.948 filename=/dev/nvme0n1 00:17:26.948 [job1] 00:17:26.948 filename=/dev/nvme0n2 00:17:26.948 [job2] 00:17:26.948 filename=/dev/nvme0n3 00:17:26.948 [job3] 00:17:26.948 filename=/dev/nvme0n4 00:17:26.948 Could not set queue depth (nvme0n1) 00:17:26.948 Could not set queue depth (nvme0n2) 00:17:26.948 Could not set queue depth (nvme0n3) 00:17:26.948 Could not set queue depth (nvme0n4) 00:17:27.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.244 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.244 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.244 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.244 fio-3.35 00:17:27.244 Starting 4 threads 00:17:28.626 00:17:28.626 job0: (groupid=0, 
jobs=1): err= 0: pid=1074232: Tue Jul 16 00:28:42 2024 00:17:28.626 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:17:28.626 slat (nsec): min=911, max=7819.7k, avg=74130.40, stdev=425532.81 00:17:28.626 clat (usec): min=3541, max=20136, avg=9448.98, stdev=1758.57 00:17:28.626 lat (usec): min=3544, max=20144, avg=9523.11, stdev=1758.92 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 4555], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 7963], 00:17:28.626 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10028], 00:17:28.626 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:17:28.626 | 99.00th=[14484], 99.50th=[15533], 99.90th=[20055], 99.95th=[20055], 00:17:28.626 | 99.99th=[20055] 00:17:28.626 write: IOPS=6969, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1002msec); 0 zone resets 00:17:28.626 slat (nsec): min=1542, max=10954k, avg=68963.77, stdev=420151.79 00:17:28.626 clat (usec): min=672, max=23382, avg=9075.67, stdev=2519.00 00:17:28.626 lat (usec): min=2508, max=31571, avg=9144.63, stdev=2534.24 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 4555], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7570], 00:17:28.626 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:17:28.626 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[12125], 95.00th=[14222], 00:17:28.626 | 99.00th=[19792], 99.50th=[19792], 99.90th=[22938], 99.95th=[22938], 00:17:28.626 | 99.99th=[23462] 00:17:28.626 bw ( KiB/s): min=28632, max=28632, per=28.89%, avg=28632.00, stdev= 0.00, samples=1 00:17:28.626 iops : min= 7158, max= 7158, avg=7158.00, stdev= 0.00, samples=1 00:17:28.626 lat (usec) : 750=0.01% 00:17:28.626 lat (msec) : 4=0.33%, 10=69.79%, 20=29.50%, 50=0.37% 00:17:28.626 cpu : usr=3.40%, sys=3.90%, ctx=620, majf=0, minf=1 00:17:28.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:28.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.626 issued rwts: total=6656,6983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.626 job1: (groupid=0, jobs=1): err= 0: pid=1074234: Tue Jul 16 00:28:42 2024 00:17:28.626 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:17:28.626 slat (nsec): min=876, max=18994k, avg=99729.59, stdev=770599.20 00:17:28.626 clat (usec): min=2573, max=56145, avg=12863.78, stdev=8227.01 00:17:28.626 lat (usec): min=2580, max=56172, avg=12963.51, stdev=8297.38 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 5473], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 7898], 00:17:28.626 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10945], 00:17:28.626 | 70.00th=[11994], 80.00th=[16712], 90.00th=[24249], 95.00th=[31327], 00:17:28.626 | 99.00th=[48497], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:17:28.626 | 99.99th=[56361] 00:17:28.626 write: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1003msec); 0 zone resets 00:17:28.626 slat (nsec): min=1563, max=10624k, avg=65696.59, stdev=414847.09 00:17:28.626 clat (usec): min=615, max=29400, avg=9035.20, stdev=3212.54 00:17:28.626 lat (usec): min=725, max=29410, avg=9100.90, stdev=3233.90 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 3261], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 7308], 00:17:28.626 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8717], 00:17:28.626 | 70.00th=[ 9372], 
80.00th=[10552], 90.00th=[12649], 95.00th=[16450], 00:17:28.626 | 99.00th=[21627], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:17:28.626 | 99.99th=[29492] 00:17:28.626 bw ( KiB/s): min=20480, max=26560, per=23.73%, avg=23520.00, stdev=4299.21, samples=2 00:17:28.626 iops : min= 5120, max= 6640, avg=5880.00, stdev=1074.80, samples=2 00:17:28.626 lat (usec) : 750=0.02%, 1000=0.02% 00:17:28.626 lat (msec) : 2=0.17%, 4=0.90%, 10=62.98%, 20=27.72%, 50=7.91% 00:17:28.626 lat (msec) : 100=0.27% 00:17:28.626 cpu : usr=3.89%, sys=4.29%, ctx=629, majf=0, minf=1 00:17:28.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:28.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.626 issued rwts: total=5632,6008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.626 job2: (groupid=0, jobs=1): err= 0: pid=1074240: Tue Jul 16 00:28:42 2024 00:17:28.626 read: IOPS=6909, BW=27.0MiB/s (28.3MB/s)(27.1MiB/1004msec) 00:17:28.626 slat (nsec): min=876, max=16366k, avg=69917.37, stdev=554110.56 00:17:28.626 clat (usec): min=2545, max=35875, avg=9724.07, stdev=3638.61 00:17:28.626 lat (usec): min=2551, max=38069, avg=9793.99, stdev=3675.11 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 3523], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7373], 00:17:28.626 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:17:28.626 | 70.00th=[10290], 80.00th=[11469], 90.00th=[13173], 95.00th=[16909], 00:17:28.626 | 99.00th=[24773], 99.50th=[26346], 99.90th=[35914], 99.95th=[35914], 00:17:28.626 | 99.99th=[35914] 00:17:28.626 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:17:28.626 slat (nsec): min=1515, max=10532k, avg=60392.77, stdev=490739.06 00:17:28.626 clat (usec): min=950, max=25363, avg=8359.56, stdev=2983.13 00:17:28.626 lat (usec): min=959, max=25370, avg=8419.96, stdev=3008.60 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 3064], 5.00th=[ 3982], 10.00th=[ 4424], 20.00th=[ 5735], 00:17:28.626 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8455], 00:17:28.626 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11863], 95.00th=[12649], 00:17:28.626 | 99.00th=[18220], 99.50th=[18744], 99.90th=[18744], 99.95th=[21627], 00:17:28.626 | 99.99th=[25297] 00:17:28.626 bw ( KiB/s): min=26192, max=31152, per=28.93%, avg=28672.00, stdev=3507.25, samples=2 00:17:28.626 iops : min= 6548, max= 7788, avg=7168.00, stdev=876.81, samples=2 00:17:28.626 lat (usec) : 1000=0.02% 00:17:28.626 lat (msec) : 2=0.16%, 4=3.51%, 10=69.33%, 20=25.68%, 50=1.30% 00:17:28.626 cpu : usr=5.68%, sys=6.48%, ctx=377, majf=0, minf=1 00:17:28.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:28.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.626 issued rwts: total=6937,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.626 job3: (groupid=0, jobs=1): err= 0: pid=1074241: Tue Jul 16 00:28:42 2024 00:17:28.626 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:28.626 slat (nsec): min=903, max=10982k, avg=106877.70, stdev=640614.19 00:17:28.626 clat (usec): min=5935, max=40468, avg=13826.23, stdev=5585.84 00:17:28.626 
lat (usec): min=5943, max=40474, avg=13933.11, stdev=5631.33 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 6783], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[10552], 00:17:28.626 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:17:28.626 | 70.00th=[13304], 80.00th=[16057], 90.00th=[20841], 95.00th=[26608], 00:17:28.626 | 99.00th=[35390], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:17:28.626 | 99.99th=[40633] 00:17:28.626 write: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1004msec); 0 zone resets 00:17:28.626 slat (nsec): min=1573, max=18379k, avg=102924.31, stdev=728610.09 00:17:28.626 clat (usec): min=2781, max=47635, avg=13337.91, stdev=5315.99 00:17:28.626 lat (usec): min=3577, max=47669, avg=13440.83, stdev=5376.43 00:17:28.626 clat percentiles (usec): 00:17:28.626 | 1.00th=[ 5669], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10159], 00:17:28.626 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:17:28.626 | 70.00th=[13435], 80.00th=[15533], 90.00th=[18220], 95.00th=[25297], 00:17:28.626 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:17:28.626 | 99.99th=[47449] 00:17:28.626 bw ( KiB/s): min=16672, max=20192, per=18.60%, avg=18432.00, stdev=2489.02, samples=2 00:17:28.626 iops : min= 4168, max= 5048, avg=4608.00, stdev=622.25, samples=2 00:17:28.626 lat (msec) : 4=0.10%, 10=15.35%, 20=74.74%, 50=9.82% 00:17:28.626 cpu : usr=3.09%, sys=3.49%, ctx=496, majf=0, minf=1 00:17:28.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:28.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.626 issued rwts: total=4608,4714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.626 00:17:28.626 Run status group 0 (all jobs): 00:17:28.626 READ: bw=92.7MiB/s (97.2MB/s), 17.9MiB/s-27.0MiB/s (18.8MB/s-28.3MB/s), io=93.1MiB (97.6MB), run=1002-1004msec 00:17:28.626 WRITE: bw=96.8MiB/s (101MB/s), 18.3MiB/s-27.9MiB/s (19.2MB/s-29.2MB/s), io=97.2MiB (102MB), run=1002-1004msec 00:17:28.626 00:17:28.626 Disk stats (read/write): 00:17:28.626 nvme0n1: ios=5255/5632, merge=0/0, ticks=17759/17304, in_queue=35063, util=96.59% 00:17:28.626 nvme0n2: ios=4119/4168, merge=0/0, ticks=33754/18313, in_queue=52067, util=95.56% 00:17:28.626 nvme0n3: ios=5246/5632, merge=0/0, ticks=40211/42025, in_queue=82236, util=86.72% 00:17:28.626 nvme0n4: ios=3587/3584, merge=0/0, ticks=23125/22369, in_queue=45494, util=95.96% 00:17:28.626 00:28:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:28.626 [global] 00:17:28.626 thread=1 00:17:28.626 invalidate=1 00:17:28.626 rw=randwrite 00:17:28.626 time_based=1 00:17:28.626 runtime=1 00:17:28.626 ioengine=libaio 00:17:28.626 direct=1 00:17:28.626 bs=4096 00:17:28.626 iodepth=128 00:17:28.626 norandommap=0 00:17:28.626 numjobs=1 00:17:28.626 00:17:28.626 verify_dump=1 00:17:28.626 verify_backlog=512 00:17:28.626 verify_state_save=0 00:17:28.626 do_verify=1 00:17:28.626 verify=crc32c-intel 00:17:28.626 [job0] 00:17:28.626 filename=/dev/nvme0n1 00:17:28.626 [job1] 00:17:28.626 filename=/dev/nvme0n2 00:17:28.626 [job2] 00:17:28.626 filename=/dev/nvme0n3 00:17:28.626 [job3] 00:17:28.626 filename=/dev/nvme0n4 00:17:28.626 Could not set queue depth (nvme0n1) 00:17:28.626 Could 
not set queue depth (nvme0n2) 00:17:28.626 Could not set queue depth (nvme0n3) 00:17:28.626 Could not set queue depth (nvme0n4) 00:17:29.194 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:29.194 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:29.194 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:29.194 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:29.194 fio-3.35 00:17:29.194 Starting 4 threads 00:17:30.577 00:17:30.577 job0: (groupid=0, jobs=1): err= 0: pid=1074762: Tue Jul 16 00:28:43 2024 00:17:30.577 read: IOPS=3787, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1002msec) 00:17:30.577 slat (nsec): min=893, max=31120k, avg=130702.66, stdev=1191130.86 00:17:30.577 clat (usec): min=1305, max=103456, avg=15715.50, stdev=15291.04 00:17:30.577 lat (usec): min=1324, max=103483, avg=15846.20, stdev=15427.58 00:17:30.577 clat percentiles (msec): 00:17:30.577 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:17:30.577 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:17:30.577 | 70.00th=[ 15], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 48], 00:17:30.577 | 99.00th=[ 84], 99.50th=[ 84], 99.90th=[ 90], 99.95th=[ 101], 00:17:30.577 | 99.99th=[ 104] 00:17:30.577 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:17:30.577 slat (nsec): min=1447, max=13930k, avg=113034.33, stdev=742975.58 00:17:30.577 clat (usec): min=647, max=89715, avg=16196.31, stdev=12235.97 00:17:30.577 lat (usec): min=656, max=89740, avg=16309.34, stdev=12324.58 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 2606], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 6259], 00:17:30.577 | 30.00th=[ 7111], 40.00th=[ 8586], 50.00th=[11469], 60.00th=[13960], 00:17:30.577 | 70.00th=[21890], 80.00th=[27657], 90.00th=[31851], 95.00th=[38011], 00:17:30.577 | 99.00th=[58459], 99.50th=[58459], 99.90th=[65799], 99.95th=[76022], 00:17:30.577 | 99.99th=[89654] 00:17:30.577 bw ( KiB/s): min=14200, max=18568, per=18.83%, avg=16384.00, stdev=3088.64, samples=2 00:17:30.577 iops : min= 3550, max= 4642, avg=4096.00, stdev=772.16, samples=2 00:17:30.577 lat (usec) : 750=0.04% 00:17:30.577 lat (msec) : 2=0.23%, 4=3.68%, 10=40.93%, 20=27.04%, 50=24.90% 00:17:30.577 lat (msec) : 100=3.16%, 250=0.03% 00:17:30.577 cpu : usr=2.90%, sys=3.60%, ctx=307, majf=0, minf=1 00:17:30.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.577 issued rwts: total=3795,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.577 job1: (groupid=0, jobs=1): err= 0: pid=1074763: Tue Jul 16 00:28:43 2024 00:17:30.577 read: IOPS=4184, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1003msec) 00:17:30.577 slat (nsec): min=911, max=18061k, avg=125328.10, stdev=956069.28 00:17:30.577 clat (usec): min=1083, max=72064, avg=16051.31, stdev=11646.21 00:17:30.577 lat (usec): min=4054, max=72107, avg=16176.64, stdev=11756.62 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 4686], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7767], 00:17:30.577 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[12125], 60.00th=[13698], 
00:17:30.577 | 70.00th=[19530], 80.00th=[23462], 90.00th=[30540], 95.00th=[41681], 00:17:30.577 | 99.00th=[56886], 99.50th=[59507], 99.90th=[59507], 99.95th=[67634], 00:17:30.577 | 99.99th=[71828] 00:17:30.577 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:30.577 slat (nsec): min=1553, max=14011k, avg=96574.57, stdev=702131.66 00:17:30.577 clat (usec): min=2968, max=45302, avg=12838.04, stdev=6816.06 00:17:30.577 lat (usec): min=2976, max=45324, avg=12934.61, stdev=6879.96 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 4080], 5.00th=[ 4817], 10.00th=[ 6587], 20.00th=[ 7439], 00:17:30.577 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[10421], 60.00th=[12911], 00:17:30.577 | 70.00th=[14877], 80.00th=[21103], 90.00th=[22414], 95.00th=[26870], 00:17:30.577 | 99.00th=[31327], 99.50th=[33424], 99.90th=[33424], 99.95th=[39584], 00:17:30.577 | 99.99th=[45351] 00:17:30.577 bw ( KiB/s): min=16384, max=20264, per=21.06%, avg=18324.00, stdev=2743.57, samples=2 00:17:30.577 iops : min= 4096, max= 5066, avg=4581.00, stdev=685.89, samples=2 00:17:30.577 lat (msec) : 2=0.01%, 4=0.41%, 10=45.86%, 20=28.42%, 50=23.50% 00:17:30.577 lat (msec) : 100=1.81% 00:17:30.577 cpu : usr=3.29%, sys=4.29%, ctx=303, majf=0, minf=1 00:17:30.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.577 issued rwts: total=4197,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.577 job2: (groupid=0, jobs=1): err= 0: pid=1074764: Tue Jul 16 00:28:43 2024 00:17:30.577 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:17:30.577 slat (nsec): min=946, max=8629.9k, avg=60274.18, stdev=408859.13 00:17:30.577 clat (usec): min=3225, max=18603, avg=7799.68, stdev=1697.00 00:17:30.577 lat (usec): min=3228, max=18609, avg=7859.95, stdev=1726.49 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6783], 00:17:30.577 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7898], 00:17:30.577 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[10421], 00:17:30.577 | 99.00th=[15008], 99.50th=[16450], 99.90th=[18482], 99.95th=[18482], 00:17:30.577 | 99.99th=[18482] 00:17:30.577 write: IOPS=8463, BW=33.1MiB/s (34.7MB/s)(33.2MiB/1003msec); 0 zone resets 00:17:30.577 slat (nsec): min=1553, max=6740.0k, avg=55826.89, stdev=317123.06 00:17:30.577 clat (usec): min=799, max=30564, avg=7455.75, stdev=3020.64 00:17:30.577 lat (usec): min=1206, max=31236, avg=7511.58, stdev=3038.28 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 3392], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 5932], 00:17:30.577 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 7242], 00:17:30.577 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9503], 95.00th=[11600], 00:17:30.577 | 99.00th=[22676], 99.50th=[27132], 99.90th=[30278], 99.95th=[30540], 00:17:30.577 | 99.99th=[30540] 00:17:30.577 bw ( KiB/s): min=32040, max=34848, per=38.43%, avg=33444.00, stdev=1985.56, samples=2 00:17:30.577 iops : min= 8010, max= 8712, avg=8361.00, stdev=496.39, samples=2 00:17:30.577 lat (usec) : 1000=0.01% 00:17:30.577 lat (msec) : 2=0.11%, 4=1.17%, 10=91.01%, 20=7.00%, 50=0.70% 00:17:30.577 cpu : usr=3.49%, sys=7.19%, ctx=886, majf=0, minf=1 00:17:30.577 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.577 issued rwts: total=8192,8489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.577 job3: (groupid=0, jobs=1): err= 0: pid=1074765: Tue Jul 16 00:28:43 2024 00:17:30.577 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:30.577 slat (nsec): min=891, max=13318k, avg=102735.66, stdev=674252.97 00:17:30.577 clat (usec): min=1187, max=58228, avg=13137.01, stdev=6989.13 00:17:30.577 lat (usec): min=1193, max=58235, avg=13239.74, stdev=7057.22 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 2737], 5.00th=[ 5211], 10.00th=[ 7504], 20.00th=[ 8717], 00:17:30.577 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12518], 00:17:30.577 | 70.00th=[13435], 80.00th=[14877], 90.00th=[21890], 95.00th=[26084], 00:17:30.577 | 99.00th=[46400], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:17:30.577 | 99.99th=[58459] 00:17:30.577 write: IOPS=4612, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:17:30.577 slat (nsec): min=1513, max=10157k, avg=103379.23, stdev=599098.46 00:17:30.577 clat (usec): min=711, max=61542, avg=14352.13, stdev=7700.78 00:17:30.577 lat (usec): min=720, max=61548, avg=14455.51, stdev=7748.39 00:17:30.577 clat percentiles (usec): 00:17:30.577 | 1.00th=[ 1549], 5.00th=[ 5997], 10.00th=[ 7832], 20.00th=[ 8717], 00:17:30.577 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11469], 60.00th=[12256], 00:17:30.577 | 70.00th=[15270], 80.00th=[21103], 90.00th=[25560], 95.00th=[28443], 00:17:30.577 | 99.00th=[42730], 99.50th=[46400], 99.90th=[61604], 99.95th=[61604], 00:17:30.577 | 99.99th=[61604] 00:17:30.577 bw ( KiB/s): min=16656, max=20208, per=21.18%, avg=18432.00, stdev=2511.64, samples=2 00:17:30.577 iops : min= 4164, max= 5052, avg=4608.00, stdev=627.91, samples=2 00:17:30.577 lat (usec) : 750=0.03% 00:17:30.577 lat (msec) : 2=0.77%, 4=1.29%, 10=26.61%, 20=52.57%, 50=18.23% 00:17:30.577 lat (msec) : 100=0.51% 00:17:30.577 cpu : usr=3.39%, sys=3.89%, ctx=475, majf=0, minf=1 00:17:30.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.577 issued rwts: total=4608,4626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.577 00:17:30.577 Run status group 0 (all jobs): 00:17:30.577 READ: bw=81.0MiB/s (84.9MB/s), 14.8MiB/s-31.9MiB/s (15.5MB/s-33.5MB/s), io=81.2MiB (85.2MB), run=1002-1003msec 00:17:30.577 WRITE: bw=85.0MiB/s (89.1MB/s), 16.0MiB/s-33.1MiB/s (16.7MB/s-34.7MB/s), io=85.2MiB (89.4MB), run=1002-1003msec 00:17:30.577 00:17:30.577 Disk stats (read/write): 00:17:30.577 nvme0n1: ios=3413/3584, merge=0/0, ticks=35361/31892, in_queue=67253, util=96.39% 00:17:30.577 nvme0n2: ios=3504/3584, merge=0/0, ticks=33410/25684, in_queue=59094, util=97.25% 00:17:30.577 nvme0n3: ios=6888/7168, merge=0/0, ticks=37114/34990, in_queue=72104, util=99.47% 00:17:30.577 nvme0n4: ios=3584/3647, merge=0/0, ticks=33303/28767, in_queue=62070, util=89.53% 00:17:30.577 00:28:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:30.577 00:28:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # 
fio_pid=1075077 00:17:30.577 00:28:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:30.577 00:28:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:30.578 [global] 00:17:30.578 thread=1 00:17:30.578 invalidate=1 00:17:30.578 rw=read 00:17:30.578 time_based=1 00:17:30.578 runtime=10 00:17:30.578 ioengine=libaio 00:17:30.578 direct=1 00:17:30.578 bs=4096 00:17:30.578 iodepth=1 00:17:30.578 norandommap=1 00:17:30.578 numjobs=1 00:17:30.578 00:17:30.578 [job0] 00:17:30.578 filename=/dev/nvme0n1 00:17:30.578 [job1] 00:17:30.578 filename=/dev/nvme0n2 00:17:30.578 [job2] 00:17:30.578 filename=/dev/nvme0n3 00:17:30.578 [job3] 00:17:30.578 filename=/dev/nvme0n4 00:17:30.578 Could not set queue depth (nvme0n1) 00:17:30.578 Could not set queue depth (nvme0n2) 00:17:30.578 Could not set queue depth (nvme0n3) 00:17:30.578 Could not set queue depth (nvme0n4) 00:17:30.578 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.578 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.578 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.578 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.578 fio-3.35 00:17:30.578 Starting 4 threads 00:17:33.961 00:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:33.961 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9166848, buflen=4096 00:17:33.961 fio: pid=1075288, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:33.961 00:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:33.961 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3571712, buflen=4096 00:17:33.961 fio: pid=1075287, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:33.961 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.961 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:33.961 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7520256, buflen=4096 00:17:33.961 fio: pid=1075285, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:33.961 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.961 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:33.961 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=393216, buflen=4096 00:17:33.961 fio: pid=1075286, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:33.961 00:17:33.961 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1075285: Tue Jul 16 00:28:47 2024 00:17:33.961 read: IOPS=629, BW=2519KiB/s (2579kB/s)(7344KiB/2916msec) 00:17:33.961 slat (usec): min=4, max=610, avg=19.52, stdev=17.31 00:17:33.961 clat 
(usec): min=685, max=42974, avg=1563.50, stdev=4738.40 00:17:33.961 lat (usec): min=693, max=42999, avg=1582.70, stdev=4740.28 00:17:33.961 clat percentiles (usec): 00:17:33.961 | 1.00th=[ 799], 5.00th=[ 848], 10.00th=[ 873], 20.00th=[ 906], 00:17:33.961 | 30.00th=[ 930], 40.00th=[ 979], 50.00th=[ 1029], 60.00th=[ 1057], 00:17:33.961 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:17:33.961 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:17:33.961 | 99.99th=[42730] 00:17:33.961 bw ( KiB/s): min= 88, max= 4064, per=42.96%, avg=2812.80, stdev=1579.13, samples=5 00:17:33.961 iops : min= 22, max= 1016, avg=703.20, stdev=394.78, samples=5 00:17:33.961 lat (usec) : 750=0.16%, 1000=44.64% 00:17:33.961 lat (msec) : 2=53.78%, 50=1.36% 00:17:33.961 cpu : usr=0.58%, sys=1.89%, ctx=1839, majf=0, minf=1 00:17:33.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.961 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.961 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.961 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1075286: Tue Jul 16 00:28:47 2024 00:17:33.961 read: IOPS=31, BW=125KiB/s (128kB/s)(384KiB/3081msec) 00:17:33.961 slat (usec): min=4, max=1748, avg=41.00, stdev=175.84 00:17:33.961 clat (usec): min=762, max=42973, avg=32037.16, stdev=17547.21 00:17:33.961 lat (usec): min=767, max=44154, avg=32078.45, stdev=17564.76 00:17:33.961 clat percentiles (usec): 00:17:33.961 | 1.00th=[ 766], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 1090], 00:17:33.961 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:33.961 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:33.961 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:33.961 | 99.99th=[42730] 00:17:33.961 bw ( KiB/s): min= 96, max= 264, per=1.97%, avg=129.60, stdev=75.13, samples=5 00:17:33.961 iops : min= 24, max= 66, avg=32.40, stdev=18.78, samples=5 00:17:33.961 lat (usec) : 1000=16.49% 00:17:33.961 lat (msec) : 2=7.22%, 50=75.26% 00:17:33.961 cpu : usr=0.00%, sys=0.13%, ctx=101, majf=0, minf=1 00:17:33.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.961 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.961 issued rwts: total=97,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.962 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1075287: Tue Jul 16 00:28:47 2024 00:17:33.962 read: IOPS=318, BW=1273KiB/s (1304kB/s)(3488KiB/2739msec) 00:17:33.962 slat (usec): min=4, max=15885, avg=46.81, stdev=619.19 00:17:33.962 clat (usec): min=287, max=43008, avg=3089.27, stdev=9396.83 00:17:33.962 lat (usec): min=295, max=43033, avg=3136.12, stdev=9412.18 00:17:33.962 clat percentiles (usec): 00:17:33.962 | 1.00th=[ 433], 5.00th=[ 529], 10.00th=[ 611], 20.00th=[ 668], 00:17:33.962 | 30.00th=[ 799], 40.00th=[ 857], 50.00th=[ 881], 60.00th=[ 906], 00:17:33.962 | 70.00th=[ 922], 80.00th=[ 947], 90.00th=[ 971], 95.00th=[41681], 00:17:33.962 | 99.00th=[42206], 99.50th=[42730], 
99.90th=[43254], 99.95th=[43254], 00:17:33.962 | 99.99th=[43254] 00:17:33.962 bw ( KiB/s): min= 96, max= 2920, per=14.18%, avg=928.00, stdev=1182.80, samples=5 00:17:33.962 iops : min= 24, max= 730, avg=232.00, stdev=295.70, samples=5 00:17:33.962 lat (usec) : 500=3.32%, 750=23.48%, 1000=66.09% 00:17:33.962 lat (msec) : 2=1.26%, 4=0.23%, 50=5.50% 00:17:33.962 cpu : usr=0.29%, sys=0.66%, ctx=878, majf=0, minf=1 00:17:33.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.962 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.962 issued rwts: total=873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.962 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1075288: Tue Jul 16 00:28:47 2024 00:17:33.962 read: IOPS=872, BW=3489KiB/s (3572kB/s)(8952KiB/2566msec) 00:17:33.962 slat (nsec): min=6495, max=62405, avg=24302.42, stdev=4275.54 00:17:33.962 clat (usec): min=514, max=42478, avg=1115.68, stdev=2730.84 00:17:33.962 lat (usec): min=538, max=42486, avg=1139.98, stdev=2730.68 00:17:33.962 clat percentiles (usec): 00:17:33.962 | 1.00th=[ 685], 5.00th=[ 758], 10.00th=[ 816], 20.00th=[ 865], 00:17:33.962 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 963], 00:17:33.962 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1045], 00:17:33.962 | 99.00th=[ 1156], 99.50th=[ 4047], 99.90th=[42206], 99.95th=[42730], 00:17:33.962 | 99.99th=[42730] 00:17:33.962 bw ( KiB/s): min= 1768, max= 4136, per=53.13%, avg=3478.40, stdev=1035.81, samples=5 00:17:33.962 iops : min= 442, max= 1034, avg=869.60, stdev=258.95, samples=5 00:17:33.962 lat (usec) : 750=4.11%, 1000=78.20% 00:17:33.962 lat (msec) : 2=17.02%, 4=0.09%, 10=0.09%, 50=0.45% 00:17:33.962 cpu : usr=0.66%, sys=2.73%, ctx=2239, majf=0, minf=2 00:17:33.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.962 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.962 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.962 00:17:33.962 Run status group 0 (all jobs): 00:17:33.962 READ: bw=6546KiB/s (6703kB/s), 125KiB/s-3489KiB/s (128kB/s-3572kB/s), io=19.7MiB (20.7MB), run=2566-3081msec 00:17:33.962 00:17:33.962 Disk stats (read/write): 00:17:33.962 nvme0n1: ios=1802/0, merge=0/0, ticks=2694/0, in_queue=2694, util=94.79% 00:17:33.962 nvme0n2: ios=90/0, merge=0/0, ticks=2826/0, in_queue=2826, util=95.27% 00:17:33.962 nvme0n3: ios=678/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.03% 00:17:33.962 nvme0n4: ios=2005/0, merge=0/0, ticks=2229/0, in_queue=2229, util=96.02% 00:17:33.962 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.962 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:34.223 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:34.223 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:34.223 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:34.223 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:34.484 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:34.484 00:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1075077 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:34.745 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:35.006 nvmf hotplug test: fio failed as expected 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:35.006 00:28:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.006 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.006 rmmod nvme_tcp 00:17:35.006 rmmod nvme_fabrics 00:17:35.006 rmmod nvme_keyring 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1071536 ']' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1071536 ']' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071536' 00:17:35.266 killing process with pid 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1071536 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.266 00:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.813 00:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.813 00:17:37.813 real 0m29.500s 00:17:37.813 user 2m33.239s 00:17:37.813 sys 0m9.707s 00:17:37.813 00:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.813 00:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.813 ************************************ 00:17:37.813 END TEST nvmf_fio_target 00:17:37.813 ************************************ 00:17:37.813 00:28:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:37.813 00:28:50 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:37.813 00:28:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:37.813 00:28:50 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.813 00:28:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.813 ************************************ 00:17:37.813 START TEST nvmf_bdevio 00:17:37.813 ************************************ 00:17:37.813 00:28:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:37.813 * Looking for test storage... 00:17:37.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.813 00:28:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.814 00:28:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.956 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:45.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:45.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:45.957 Found net devices under 0000:31:00.0: cvl_0_0 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:45.957 
Found net devices under 0000:31:00.1: cvl_0_1 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.957 00:28:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:45.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:17:45.957 00:17:45.957 --- 10.0.0.2 ping statistics --- 00:17:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.957 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:17:45.957 00:17:45.957 --- 10.0.0.1 ping statistics --- 00:17:45.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.957 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1080744 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1080744 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1080744 ']' 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.957 00:28:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:45.957 [2024-07-16 00:28:59.254990] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:17:45.957 [2024-07-16 00:28:59.255059] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.957 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.957 [2024-07-16 00:28:59.350159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.957 [2024-07-16 00:28:59.442619] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.957 [2024-07-16 00:28:59.442678] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
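Editor's note: the nvmf_tcp_init trace above moves one of the two e810 ports into a private network namespace so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) can talk over NVMe/TCP port 4420. A minimal standalone sketch of that wiring, with every command copied from the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this machine, and root privileges are assumed:

  # recreate the target/initiator split used by nvmf_tcp_init in this run
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                # sanity check: root namespace reaches the target IP
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the namespace reaches the initiator IP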
00:17:45.957 [2024-07-16 00:28:59.442687] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.957 [2024-07-16 00:28:59.442694] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.957 [2024-07-16 00:28:59.442700] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.957 [2024-07-16 00:28:59.442862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:45.957 [2024-07-16 00:28:59.443012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:45.957 [2024-07-16 00:28:59.443175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:45.957 [2024-07-16 00:28:59.443176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.553 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.553 [2024-07-16 00:29:00.096513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 Malloc0 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
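Editor's note: the rpc_cmd calls traced above (bdevio.sh lines 18-22) build the whole target-side configuration for this test. A standalone sketch of the same setup issued directly through scripts/rpc.py against the default /var/tmp/spdk.sock socket; command names and arguments are copied verbatim from the trace, so paths, the listen address and the NQN would need adjusting elsewhere:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the malloc bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420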
00:17:46.554 [2024-07-16 00:29:00.163696] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:46.554 { 00:17:46.554 "params": { 00:17:46.554 "name": "Nvme$subsystem", 00:17:46.554 "trtype": "$TEST_TRANSPORT", 00:17:46.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:46.554 "adrfam": "ipv4", 00:17:46.554 "trsvcid": "$NVMF_PORT", 00:17:46.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:46.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:46.554 "hdgst": ${hdgst:-false}, 00:17:46.554 "ddgst": ${ddgst:-false} 00:17:46.554 }, 00:17:46.554 "method": "bdev_nvme_attach_controller" 00:17:46.554 } 00:17:46.554 EOF 00:17:46.554 )") 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:46.554 00:29:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:46.554 "params": { 00:17:46.554 "name": "Nvme1", 00:17:46.554 "trtype": "tcp", 00:17:46.554 "traddr": "10.0.0.2", 00:17:46.554 "adrfam": "ipv4", 00:17:46.554 "trsvcid": "4420", 00:17:46.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.554 "hdgst": false, 00:17:46.554 "ddgst": false 00:17:46.554 }, 00:17:46.554 "method": "bdev_nvme_attach_controller" 00:17:46.554 }' 00:17:46.813 [2024-07-16 00:29:00.228404] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
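Editor's note: gen_nvmf_target_json above assembles the JSON configuration that bdevio reads from /dev/fd/62. The trace prints only the inner bdev_nvme_attach_controller entry, so in the sketch below the outer "subsystems"/"bdev" wrapper is an assumption based on the usual SPDK JSON-config layout, and process substitution stands in for the fd 62 redirection; the parameter values themselves are exactly those printed in the trace:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )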
00:17:46.813 [2024-07-16 00:29:00.228468] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081021 ] 00:17:46.813 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.813 [2024-07-16 00:29:00.300120] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:46.813 [2024-07-16 00:29:00.375305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.813 [2024-07-16 00:29:00.375435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.813 [2024-07-16 00:29:00.375438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.072 I/O targets: 00:17:47.072 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:47.072 00:17:47.072 00:17:47.072 CUnit - A unit testing framework for C - Version 2.1-3 00:17:47.072 http://cunit.sourceforge.net/ 00:17:47.072 00:17:47.072 00:17:47.072 Suite: bdevio tests on: Nvme1n1 00:17:47.072 Test: blockdev write read block ...passed 00:17:47.072 Test: blockdev write zeroes read block ...passed 00:17:47.072 Test: blockdev write zeroes read no split ...passed 00:17:47.331 Test: blockdev write zeroes read split ...passed 00:17:47.331 Test: blockdev write zeroes read split partial ...passed 00:17:47.331 Test: blockdev reset ...[2024-07-16 00:29:00.737095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:47.331 [2024-07-16 00:29:00.737154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a04d0 (9): Bad file descriptor 00:17:47.332 [2024-07-16 00:29:00.924167] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:47.332 passed 00:17:47.590 Test: blockdev write read 8 blocks ...passed 00:17:47.590 Test: blockdev write read size > 128k ...passed 00:17:47.590 Test: blockdev write read invalid size ...passed 00:17:47.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:47.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:47.590 Test: blockdev write read max offset ...passed 00:17:47.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:47.590 Test: blockdev writev readv 8 blocks ...passed 00:17:47.590 Test: blockdev writev readv 30 x 1block ...passed 00:17:47.590 Test: blockdev writev readv block ...passed 00:17:47.590 Test: blockdev writev readv size > 128k ...passed 00:17:47.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:47.590 Test: blockdev comparev and writev ...[2024-07-16 00:29:01.191761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.191785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.191796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.191801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.192348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.192357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.192366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.192371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.192903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.192910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.192920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.192925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.193434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.193441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:47.590 [2024-07-16 00:29:01.193450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:47.590 [2024-07-16 00:29:01.193455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:47.850 passed 00:17:47.850 Test: blockdev nvme passthru rw ...passed 00:17:47.850 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:29:01.278185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.850 [2024-07-16 00:29:01.278195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:47.850 [2024-07-16 00:29:01.278577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.850 [2024-07-16 00:29:01.278585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:47.850 [2024-07-16 00:29:01.279006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.850 [2024-07-16 00:29:01.279014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:47.850 [2024-07-16 00:29:01.279395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.850 [2024-07-16 00:29:01.279402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:47.850 passed 00:17:47.850 Test: blockdev nvme admin passthru ...passed 00:17:47.850 Test: blockdev copy ...passed 00:17:47.850 00:17:47.850 Run Summary: Type Total Ran Passed Failed Inactive 00:17:47.850 suites 1 1 n/a 0 0 00:17:47.850 tests 23 23 23 0 0 00:17:47.850 asserts 152 152 152 0 n/a 00:17:47.850 00:17:47.850 Elapsed time = 1.557 seconds 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.850 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.850 rmmod nvme_tcp 00:17:48.110 rmmod nvme_fabrics 00:17:48.110 rmmod nvme_keyring 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1080744 ']' 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1080744 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1080744 ']' 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1080744 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080744 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080744' 00:17:48.111 killing process with pid 1080744 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1080744 00:17:48.111 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1080744 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.371 00:29:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.281 00:29:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.281 00:17:50.281 real 0m12.860s 00:17:50.281 user 0m13.978s 00:17:50.281 sys 0m6.581s 00:17:50.281 00:29:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.281 00:29:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.281 ************************************ 00:17:50.281 END TEST nvmf_bdevio 00:17:50.281 ************************************ 00:17:50.281 00:29:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:50.281 00:29:03 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:50.281 00:29:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.281 00:29:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.281 00:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.542 ************************************ 00:17:50.542 START TEST nvmf_auth_target 00:17:50.542 ************************************ 00:17:50.542 00:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:50.542 * Looking for test storage... 
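Editor's note: before nvmf_auth_target starts, the bdevio run above tears itself down by unloading the NVMe/TCP host modules, killing the nvmf_tgt process and flushing the initiator address. A condensed sketch of that teardown, using the pid from this run; _remove_spdk_ns is redirected to /dev/null in the trace, so namespace cleanup is not reproduced here:

  modprobe -v -r nvme-tcp        # the rmmod lines above show this also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 1080744                   # nvmf_tgt pid started for this test
  ip -4 addr flush cvl_0_1       # remove the initiator-side 10.0.0.1/24 address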
00:17:50.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.542 00:29:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.543 00:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.685 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.686 00:29:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:58.686 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:58.686 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:17:58.686 Found net devices under 0000:31:00.0: cvl_0_0 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:58.686 Found net devices under 0000:31:00.1: cvl_0_1 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.686 00:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:17:58.686 00:17:58.686 --- 10.0.0.2 ping statistics --- 00:17:58.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.686 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:17:58.686 00:17:58.686 --- 10.0.0.1 ping statistics --- 00:17:58.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.686 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1086033 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1086033 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086033 ']' 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
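The nvmf_tcp_init trace above builds a back-to-back NVMe/TCP topology on a single machine: one E810 port is moved into a fresh network namespace as the target side (10.0.0.2/24 on cvl_0_0 in cvl_0_0_ns_spdk), the other port stays in the default namespace as the initiator (10.0.0.1/24 on cvl_0_1), an iptables rule admits the NVMe/TCP listener port 4420, both directions are verified with ping, and nvme-tcp is loaded. A minimal stand-alone sketch of the same layout follows; the interface names are placeholders for whatever two ports a system exposes (the log uses cvl_0_0 and cvl_0_1).

# Sketch only: recreate the namespace-based NVMe/TCP test topology from the trace above.
# IF_TGT / IF_INI are placeholder names; substitute your own two NIC ports.
IF_TGT=cvl_0_0          # will serve the NVMe-oF target inside the namespace
IF_INI=cvl_0_1          # stays in the default namespace as the initiator side
NS=cvl_0_0_ns_spdk

ip -4 addr flush dev "$IF_TGT"
ip -4 addr flush dev "$IF_INI"
ip netns add "$NS"
ip link set "$IF_TGT" netns "$NS"

ip addr add 10.0.0.1/24 dev "$IF_INI"                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"  # target IP

ip link set "$IF_INI" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
modprobe nvme-tcp

This is why every target-side command that follows in the log is wrapped in "ip netns exec cvl_0_0_ns_spdk" (including the nvmf_tgt launch), while the host/initiator side runs in the default namespace.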
00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.686 00:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.627 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.627 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:59.627 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.627 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.627 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1086064 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0ac329c58b1e0471e41da184a5ba887791c7fe05ce71d3b2 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.k9k 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0ac329c58b1e0471e41da184a5ba887791c7fe05ce71d3b2 0 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0ac329c58b1e0471e41da184a5ba887791c7fe05ce71d3b2 0 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0ac329c58b1e0471e41da184a5ba887791c7fe05ce71d3b2 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.k9k 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.k9k 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.k9k 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed6081cd0fa68b19a8f321f93715eb38642d671c35e974bbf971d70aa5b1e851 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UCU 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed6081cd0fa68b19a8f321f93715eb38642d671c35e974bbf971d70aa5b1e851 3 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed6081cd0fa68b19a8f321f93715eb38642d671c35e974bbf971d70aa5b1e851 3 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed6081cd0fa68b19a8f321f93715eb38642d671c35e974bbf971d70aa5b1e851 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UCU 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UCU 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.UCU 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=95b762c6c89d464d9a7e988ac1f465d1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vnc 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 95b762c6c89d464d9a7e988ac1f465d1 1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 95b762c6c89d464d9a7e988ac1f465d1 1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=95b762c6c89d464d9a7e988ac1f465d1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vnc 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vnc 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.vnc 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:59.628 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=192685c1e6798733b6ea27021c156804e1ed90e54679d2f0 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DsZ 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 192685c1e6798733b6ea27021c156804e1ed90e54679d2f0 2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 192685c1e6798733b6ea27021c156804e1ed90e54679d2f0 2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=192685c1e6798733b6ea27021c156804e1ed90e54679d2f0 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DsZ 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DsZ 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.DsZ 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=adcd943c2b113eb96760d17cb68093dc3a804ea5274228be 00:17:59.888 
00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.w1k 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key adcd943c2b113eb96760d17cb68093dc3a804ea5274228be 2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 adcd943c2b113eb96760d17cb68093dc3a804ea5274228be 2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=adcd943c2b113eb96760d17cb68093dc3a804ea5274228be 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.w1k 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.w1k 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.w1k 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b17b9e31b39b3541fd2df9606f7e1ca4 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gck 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b17b9e31b39b3541fd2df9606f7e1ca4 1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b17b9e31b39b3541fd2df9606f7e1ca4 1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b17b9e31b39b3541fd2df9606f7e1ca4 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gck 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gck 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Gck 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83801c742faf07dabfb4f75b198699c0e0d85f71bdf190786076e05324821085 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.W11 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83801c742faf07dabfb4f75b198699c0e0d85f71bdf190786076e05324821085 3 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83801c742faf07dabfb4f75b198699c0e0d85f71bdf190786076e05324821085 3 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83801c742faf07dabfb4f75b198699c0e0d85f71bdf190786076e05324821085 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.W11 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.W11 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.W11 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1086033 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086033 ']' 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
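The gen_dhchap_key calls above produce the four host keys and three controller keys used in this run: each reads 16/24/32 random bytes with xxd, keeps the resulting hex string itself as the secret, and wraps it into the DH-HMAC-CHAP transport envelope DHHC-1:<hmac-id>:<base64 blob>:, where the hmac-id matches the digests map in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). The sketch below approximates that helper; it assumes the base64 blob is the secret followed by a 4-byte CRC32 of the secret (little-endian, as nvme-cli produces) — treat the exact CRC handling as an assumption and see format_dhchap_key in test/nvmf/common.sh for the authoritative version.

# Sketch of gen_dhchap_key/format_dhchap_key: build a DHHC-1 transport key.
# The CRC32 placement/endianness below is an assumption; the test's own helper
# in test/nvmf/common.sh is the reference implementation.
gen_dhchap_key_sketch() {
    local hmac_id=$1 len=$2                              # e.g. 00 and 48, as for keys[0]
    local secret
    secret=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # hex string of $len characters
    python3 - "$hmac_id" "$secret" <<'PY'
import base64, sys, zlib
hmac_id, secret = sys.argv[1], sys.argv[2].encode()
blob = secret + zlib.crc32(secret).to_bytes(4, "little")   # secret || CRC32 (assumed LE)
print(f"DHHC-1:{hmac_id}:{base64.b64encode(blob).decode()}:")
PY
}

key=$(gen_dhchap_key_sketch 00 48)                  # null-HMAC key, 48 hex chars, like keys[0]
umask 077
printf '%s\n' "$key" > /tmp/spdk.key-null.example   # hypothetical path

The nvme connect lines later in the log show these exact envelopes on the wire, e.g. --dhchap-secret DHHC-1:00:...: for key0 paired with --dhchap-ctrl-secret DHHC-1:03:...: for ckey0.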
00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.888 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1086064 /var/tmp/host.sock 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086064 ']' 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.148 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k9k 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.k9k 00:18:00.408 00:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.k9k 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.UCU ]] 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UCU 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UCU 00:18:00.408 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UCU 00:18:00.668 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:00.668 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vnc 00:18:00.668 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.668 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.669 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.669 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vnc 00:18:00.669 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vnc 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.DsZ ]] 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DsZ 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DsZ 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DsZ 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.w1k 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.929 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.w1k 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.w1k 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Gck ]] 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gck 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gck 00:18:01.190 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Gck 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.W11 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.W11 00:18:01.451 00:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.W11 00:18:01.451 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:01.451 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:01.451 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.451 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.451 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.712 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.973 00:18:01.973 00:29:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.973 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.973 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.234 { 00:18:02.234 "cntlid": 1, 00:18:02.234 "qid": 0, 00:18:02.234 "state": "enabled", 00:18:02.234 "thread": "nvmf_tgt_poll_group_000", 00:18:02.234 "listen_address": { 00:18:02.234 "trtype": "TCP", 00:18:02.234 "adrfam": "IPv4", 00:18:02.234 "traddr": "10.0.0.2", 00:18:02.234 "trsvcid": "4420" 00:18:02.234 }, 00:18:02.234 "peer_address": { 00:18:02.234 "trtype": "TCP", 00:18:02.234 "adrfam": "IPv4", 00:18:02.234 "traddr": "10.0.0.1", 00:18:02.234 "trsvcid": "32874" 00:18:02.234 }, 00:18:02.234 "auth": { 00:18:02.234 "state": "completed", 00:18:02.234 "digest": "sha256", 00:18:02.234 "dhgroup": "null" 00:18:02.234 } 00:18:02.234 } 00:18:02.234 ]' 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.234 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.495 00:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.065 00:29:16 
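The trace above is one pass of the nested digest/dhgroup/key loops (target/auth.sh@91–94): the host bdev layer is restricted to the digest and DH group under test, the host NQN is registered on the subsystem with a DH-HMAC-CHAP key pair, an SPDK-side controller is attached with in-band authentication, the negotiated parameters are read back from nvmf_subsystem_get_qpairs, and the controller is detached before the next combination. A condensed sketch of that RPC sequence follows; key0/ckey0 are the keyring names registered earlier with keyring_file_add_key, and $RPC stands in for the full Jenkins path to scripts/rpc.py.

# Sketch of one connect_authenticate pass (sha256 / null / key0), assuming
# RPC=scripts/rpc.py, target RPC on the default socket, host RPC on /var/tmp/host.sock.
RPC=scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Host side: only allow the digest/dhgroup under test for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: register the host with its key pair (key0 = host key, ckey0 = controller key).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating in-band with the same key pair.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm the queue pair finished authentication with the expected parameters.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'   # expect state "completed"

# Tear down before the next digest/dhgroup/key combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0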
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.065 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.326 00:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.586 00:18:03.586 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.586 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.586 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.846 { 00:18:03.846 "cntlid": 3, 00:18:03.846 "qid": 0, 00:18:03.846 
"state": "enabled", 00:18:03.846 "thread": "nvmf_tgt_poll_group_000", 00:18:03.846 "listen_address": { 00:18:03.846 "trtype": "TCP", 00:18:03.846 "adrfam": "IPv4", 00:18:03.846 "traddr": "10.0.0.2", 00:18:03.846 "trsvcid": "4420" 00:18:03.846 }, 00:18:03.846 "peer_address": { 00:18:03.846 "trtype": "TCP", 00:18:03.846 "adrfam": "IPv4", 00:18:03.846 "traddr": "10.0.0.1", 00:18:03.846 "trsvcid": "32910" 00:18:03.846 }, 00:18:03.846 "auth": { 00:18:03.846 "state": "completed", 00:18:03.846 "digest": "sha256", 00:18:03.846 "dhgroup": "null" 00:18:03.846 } 00:18:03.846 } 00:18:03.846 ]' 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.846 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.106 00:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.674 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.933 00:29:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.933 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.192 00:18:05.192 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.192 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.192 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.452 { 00:18:05.452 "cntlid": 5, 00:18:05.452 "qid": 0, 00:18:05.452 "state": "enabled", 00:18:05.452 "thread": "nvmf_tgt_poll_group_000", 00:18:05.452 "listen_address": { 00:18:05.452 "trtype": "TCP", 00:18:05.452 "adrfam": "IPv4", 00:18:05.452 "traddr": "10.0.0.2", 00:18:05.452 "trsvcid": "4420" 00:18:05.452 }, 00:18:05.452 "peer_address": { 00:18:05.452 "trtype": "TCP", 00:18:05.452 "adrfam": "IPv4", 00:18:05.452 "traddr": "10.0.0.1", 00:18:05.452 "trsvcid": "32934" 00:18:05.452 }, 00:18:05.452 "auth": { 00:18:05.452 "state": "completed", 00:18:05.452 "digest": "sha256", 00:18:05.452 "dhgroup": "null" 00:18:05.452 } 00:18:05.452 } 00:18:05.452 ]' 00:18:05.452 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.453 00:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.713 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.282 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.542 00:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.542 00:18:06.542 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.542 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.542 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.801 { 00:18:06.801 "cntlid": 7, 00:18:06.801 "qid": 0, 00:18:06.801 "state": "enabled", 00:18:06.801 "thread": "nvmf_tgt_poll_group_000", 00:18:06.801 "listen_address": { 00:18:06.801 "trtype": "TCP", 00:18:06.801 "adrfam": "IPv4", 00:18:06.801 "traddr": "10.0.0.2", 00:18:06.801 "trsvcid": "4420" 00:18:06.801 }, 00:18:06.801 "peer_address": { 00:18:06.801 "trtype": "TCP", 00:18:06.801 "adrfam": "IPv4", 00:18:06.801 "traddr": "10.0.0.1", 00:18:06.801 "trsvcid": "32952" 00:18:06.801 }, 00:18:06.801 "auth": { 00:18:06.801 "state": "completed", 00:18:06.801 "digest": "sha256", 00:18:06.801 "dhgroup": "null" 00:18:06.801 } 00:18:06.801 } 00:18:06.801 ]' 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.801 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.060 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.060 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.060 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.061 00:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.631 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.891 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.152 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:08.152 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.414 { 00:18:08.414 "cntlid": 9, 00:18:08.414 "qid": 0, 00:18:08.414 "state": "enabled", 00:18:08.414 "thread": "nvmf_tgt_poll_group_000", 00:18:08.414 "listen_address": { 00:18:08.414 "trtype": "TCP", 00:18:08.414 "adrfam": "IPv4", 00:18:08.414 "traddr": "10.0.0.2", 00:18:08.414 "trsvcid": "4420" 00:18:08.414 }, 00:18:08.414 "peer_address": { 00:18:08.414 "trtype": "TCP", 00:18:08.414 "adrfam": "IPv4", 00:18:08.414 "traddr": "10.0.0.1", 00:18:08.414 "trsvcid": "32976" 00:18:08.414 }, 00:18:08.414 "auth": { 00:18:08.414 "state": "completed", 00:18:08.414 "digest": "sha256", 00:18:08.414 "dhgroup": "ffdhe2048" 00:18:08.414 } 00:18:08.414 } 00:18:08.414 ]' 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.414 00:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.675 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.245 00:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.506 00:18:09.506 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.506 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.506 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.766 { 00:18:09.766 "cntlid": 11, 00:18:09.766 "qid": 0, 00:18:09.766 "state": "enabled", 00:18:09.766 "thread": "nvmf_tgt_poll_group_000", 00:18:09.766 "listen_address": { 00:18:09.766 "trtype": "TCP", 00:18:09.766 "adrfam": "IPv4", 00:18:09.766 "traddr": "10.0.0.2", 00:18:09.766 "trsvcid": "4420" 00:18:09.766 }, 00:18:09.766 "peer_address": { 00:18:09.766 "trtype": "TCP", 00:18:09.766 "adrfam": "IPv4", 00:18:09.766 "traddr": "10.0.0.1", 00:18:09.766 "trsvcid": "33004" 00:18:09.766 }, 00:18:09.766 "auth": { 00:18:09.766 "state": "completed", 00:18:09.766 "digest": "sha256", 00:18:09.766 "dhgroup": "ffdhe2048" 00:18:09.766 } 00:18:09.766 } 00:18:09.766 ]' 00:18:09.766 
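The trace above is one pass of the test's connect_authenticate loop. A minimal sketch of that same RPC sequence, outside the test harness, assuming the SPDK target listens on its default RPC socket, the host application on /var/tmp/host.sock, and that the named keys (key2/ckey2) were already registered with the keyring as done earlier in auth.sh:

  #!/usr/bin/env bash
  set -e
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Pin the host-side initiator to a single digest/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Allow the host on the subsystem with a DH-HMAC-CHAP key pair (target side).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach a controller from the host application, then check that the
  # resulting queue pair finished authentication with the expected parameters.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe2048

  # Tear down before the next digest/dhgroup/key iteration.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
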
00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.766 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.027 00:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.968 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.228 00:18:11.228 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.229 { 00:18:11.229 "cntlid": 13, 00:18:11.229 "qid": 0, 00:18:11.229 "state": "enabled", 00:18:11.229 "thread": "nvmf_tgt_poll_group_000", 00:18:11.229 "listen_address": { 00:18:11.229 "trtype": "TCP", 00:18:11.229 "adrfam": "IPv4", 00:18:11.229 "traddr": "10.0.0.2", 00:18:11.229 "trsvcid": "4420" 00:18:11.229 }, 00:18:11.229 "peer_address": { 00:18:11.229 "trtype": "TCP", 00:18:11.229 "adrfam": "IPv4", 00:18:11.229 "traddr": "10.0.0.1", 00:18:11.229 "trsvcid": "38474" 00:18:11.229 }, 00:18:11.229 "auth": { 00:18:11.229 "state": "completed", 00:18:11.229 "digest": "sha256", 00:18:11.229 "dhgroup": "ffdhe2048" 00:18:11.229 } 00:18:11.229 } 00:18:11.229 ]' 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.229 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.490 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.490 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.490 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.490 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.490 00:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.490 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.432 00:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.693 00:18:12.693 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.693 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.693 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.953 { 00:18:12.953 "cntlid": 15, 00:18:12.953 "qid": 0, 00:18:12.953 "state": "enabled", 00:18:12.953 "thread": "nvmf_tgt_poll_group_000", 00:18:12.953 "listen_address": { 00:18:12.953 "trtype": "TCP", 00:18:12.953 "adrfam": "IPv4", 00:18:12.953 "traddr": "10.0.0.2", 00:18:12.953 "trsvcid": "4420" 00:18:12.953 }, 00:18:12.953 "peer_address": { 00:18:12.953 "trtype": "TCP", 00:18:12.953 "adrfam": "IPv4", 00:18:12.953 "traddr": "10.0.0.1", 00:18:12.953 "trsvcid": "38512" 00:18:12.953 }, 00:18:12.953 "auth": { 00:18:12.953 "state": "completed", 00:18:12.953 "digest": "sha256", 00:18:12.953 "dhgroup": "ffdhe2048" 00:18:12.953 } 00:18:12.953 } 00:18:12.953 ]' 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.953 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.212 00:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.781 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.041 00:18:14.041 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.041 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.041 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.339 { 00:18:14.339 "cntlid": 17, 00:18:14.339 "qid": 0, 00:18:14.339 "state": "enabled", 00:18:14.339 "thread": "nvmf_tgt_poll_group_000", 00:18:14.339 "listen_address": { 00:18:14.339 "trtype": "TCP", 00:18:14.339 "adrfam": "IPv4", 00:18:14.339 "traddr": 
"10.0.0.2", 00:18:14.339 "trsvcid": "4420" 00:18:14.339 }, 00:18:14.339 "peer_address": { 00:18:14.339 "trtype": "TCP", 00:18:14.339 "adrfam": "IPv4", 00:18:14.339 "traddr": "10.0.0.1", 00:18:14.339 "trsvcid": "38538" 00:18:14.339 }, 00:18:14.339 "auth": { 00:18:14.339 "state": "completed", 00:18:14.339 "digest": "sha256", 00:18:14.339 "dhgroup": "ffdhe3072" 00:18:14.339 } 00:18:14.339 } 00:18:14.339 ]' 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.339 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.636 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.636 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.636 00:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.636 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:15.574 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.574 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.574 00:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.574 00:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.575 00:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.575 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.575 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.575 00:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.575 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.834 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.834 { 00:18:15.834 "cntlid": 19, 00:18:15.834 "qid": 0, 00:18:15.834 "state": "enabled", 00:18:15.834 "thread": "nvmf_tgt_poll_group_000", 00:18:15.834 "listen_address": { 00:18:15.834 "trtype": "TCP", 00:18:15.834 "adrfam": "IPv4", 00:18:15.834 "traddr": "10.0.0.2", 00:18:15.834 "trsvcid": "4420" 00:18:15.834 }, 00:18:15.834 "peer_address": { 00:18:15.834 "trtype": "TCP", 00:18:15.834 "adrfam": "IPv4", 00:18:15.834 "traddr": "10.0.0.1", 00:18:15.834 "trsvcid": "38566" 00:18:15.834 }, 00:18:15.834 "auth": { 00:18:15.834 "state": "completed", 00:18:15.834 "digest": "sha256", 00:18:15.834 "dhgroup": "ffdhe3072" 00:18:15.834 } 00:18:15.834 } 00:18:15.834 ]' 00:18:15.834 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.094 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.354 00:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.925 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.186 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.446 00:18:17.446 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.446 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.446 00:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.446 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.446 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.446 00:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.446 00:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.707 { 00:18:17.707 "cntlid": 21, 00:18:17.707 "qid": 0, 00:18:17.707 "state": "enabled", 00:18:17.707 "thread": "nvmf_tgt_poll_group_000", 00:18:17.707 "listen_address": { 00:18:17.707 "trtype": "TCP", 00:18:17.707 "adrfam": "IPv4", 00:18:17.707 "traddr": "10.0.0.2", 00:18:17.707 "trsvcid": "4420" 00:18:17.707 }, 00:18:17.707 "peer_address": { 00:18:17.707 "trtype": "TCP", 00:18:17.707 "adrfam": "IPv4", 00:18:17.707 "traddr": "10.0.0.1", 00:18:17.707 "trsvcid": "38592" 00:18:17.707 }, 00:18:17.707 "auth": { 00:18:17.707 "state": "completed", 00:18:17.707 "digest": "sha256", 00:18:17.707 "dhgroup": "ffdhe3072" 00:18:17.707 } 00:18:17.707 } 00:18:17.707 ]' 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.707 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.967 00:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
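The nvme connect/disconnect pair above repeats the same handshake from the kernel initiator instead of the SPDK host application. A minimal stand-alone version of that step, with placeholder DHHC-1 secret strings standing in for the keys configured on the target:

  #!/usr/bin/env bash
  set -e
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  hostid=00539ede-7deb-ec11-9bc7-a4bf01928396

  # Bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the host,
  # --dhchap-ctrl-secret additionally authenticates the controller.
  # Secrets below are placeholders; -i 1 requests a single I/O queue as in the log.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller key>'

  # ... issue I/O or identify commands against the new /dev/nvme* device ...

  nvme disconnect -n "$subnqn"
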
00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.537 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.797 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.058 00:18:19.058 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.058 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.058 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.318 { 00:18:19.318 "cntlid": 23, 00:18:19.318 "qid": 0, 00:18:19.318 "state": "enabled", 00:18:19.318 "thread": "nvmf_tgt_poll_group_000", 00:18:19.318 "listen_address": { 00:18:19.318 "trtype": "TCP", 00:18:19.318 "adrfam": "IPv4", 00:18:19.318 "traddr": "10.0.0.2", 00:18:19.318 "trsvcid": "4420" 00:18:19.318 }, 00:18:19.318 "peer_address": { 00:18:19.318 "trtype": "TCP", 00:18:19.318 "adrfam": "IPv4", 00:18:19.318 "traddr": "10.0.0.1", 00:18:19.318 "trsvcid": "38610" 00:18:19.318 }, 00:18:19.318 "auth": { 00:18:19.318 "state": "completed", 00:18:19.318 "digest": "sha256", 00:18:19.318 "dhgroup": "ffdhe3072" 00:18:19.318 } 00:18:19.318 } 00:18:19.318 ]' 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.318 00:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.578 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.147 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.406 00:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.406 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.406 00:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.406 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.666 { 00:18:20.666 "cntlid": 25, 00:18:20.666 "qid": 0, 00:18:20.666 "state": "enabled", 00:18:20.666 "thread": "nvmf_tgt_poll_group_000", 00:18:20.666 "listen_address": { 00:18:20.666 "trtype": "TCP", 00:18:20.666 "adrfam": "IPv4", 00:18:20.666 "traddr": "10.0.0.2", 00:18:20.666 "trsvcid": "4420" 00:18:20.666 }, 00:18:20.666 "peer_address": { 00:18:20.666 "trtype": "TCP", 00:18:20.666 "adrfam": "IPv4", 00:18:20.666 "traddr": "10.0.0.1", 00:18:20.666 "trsvcid": "39748" 00:18:20.666 }, 00:18:20.666 "auth": { 00:18:20.666 "state": "completed", 00:18:20.666 "digest": "sha256", 00:18:20.666 "dhgroup": "ffdhe4096" 00:18:20.666 } 00:18:20.666 } 00:18:20.666 ]' 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.666 00:29:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.666 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.927 00:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:21.497 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.497 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.497 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.497 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.757 00:29:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.757 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.018 00:18:22.018 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.018 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.018 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.278 { 00:18:22.278 "cntlid": 27, 00:18:22.278 "qid": 0, 00:18:22.278 "state": "enabled", 00:18:22.278 "thread": "nvmf_tgt_poll_group_000", 00:18:22.278 "listen_address": { 00:18:22.278 "trtype": "TCP", 00:18:22.278 "adrfam": "IPv4", 00:18:22.278 "traddr": "10.0.0.2", 00:18:22.278 "trsvcid": "4420" 00:18:22.278 }, 00:18:22.278 "peer_address": { 00:18:22.278 "trtype": "TCP", 00:18:22.278 "adrfam": "IPv4", 00:18:22.278 "traddr": "10.0.0.1", 00:18:22.278 "trsvcid": "39760" 00:18:22.278 }, 00:18:22.278 "auth": { 00:18:22.278 "state": "completed", 00:18:22.278 "digest": "sha256", 00:18:22.278 "dhgroup": "ffdhe4096" 00:18:22.278 } 00:18:22.278 } 00:18:22.278 ]' 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.278 00:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.538 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:23.474 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.474 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.474 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.474 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.475 00:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.734 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.734 00:29:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.734 { 00:18:23.734 "cntlid": 29, 00:18:23.734 "qid": 0, 00:18:23.734 "state": "enabled", 00:18:23.734 "thread": "nvmf_tgt_poll_group_000", 00:18:23.734 "listen_address": { 00:18:23.734 "trtype": "TCP", 00:18:23.734 "adrfam": "IPv4", 00:18:23.734 "traddr": "10.0.0.2", 00:18:23.734 "trsvcid": "4420" 00:18:23.734 }, 00:18:23.734 "peer_address": { 00:18:23.734 "trtype": "TCP", 00:18:23.734 "adrfam": "IPv4", 00:18:23.734 "traddr": "10.0.0.1", 00:18:23.734 "trsvcid": "39788" 00:18:23.734 }, 00:18:23.734 "auth": { 00:18:23.734 "state": "completed", 00:18:23.734 "digest": "sha256", 00:18:23.734 "dhgroup": "ffdhe4096" 00:18:23.734 } 00:18:23.734 } 00:18:23.734 ]' 00:18:23.734 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.993 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.252 00:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.820 00:29:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.820 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.080 00:18:25.080 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.080 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.080 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.339 { 00:18:25.339 "cntlid": 31, 00:18:25.339 "qid": 0, 00:18:25.339 "state": "enabled", 00:18:25.339 "thread": "nvmf_tgt_poll_group_000", 00:18:25.339 "listen_address": { 00:18:25.339 "trtype": "TCP", 00:18:25.339 "adrfam": "IPv4", 00:18:25.339 "traddr": "10.0.0.2", 00:18:25.339 "trsvcid": "4420" 00:18:25.339 }, 
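[editor's sketch] The qpair check in each round reduces to three jq assertions against nvmf_subsystem_get_qpairs output, mirroring the .auth.digest / .auth.dhgroup / .auth.state checks in the trace; a minimal standalone version, with rpc.py again standing in for the script's rpc_cmd wrapper:
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected to print sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected to print the dhgroup under test, e.g. ffdhe4096
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected to print completed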
00:18:25.339 "peer_address": { 00:18:25.339 "trtype": "TCP", 00:18:25.339 "adrfam": "IPv4", 00:18:25.339 "traddr": "10.0.0.1", 00:18:25.339 "trsvcid": "39806" 00:18:25.339 }, 00:18:25.339 "auth": { 00:18:25.339 "state": "completed", 00:18:25.339 "digest": "sha256", 00:18:25.339 "dhgroup": "ffdhe4096" 00:18:25.339 } 00:18:25.339 } 00:18:25.339 ]' 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.339 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.599 00:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.599 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.599 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.599 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:26.169 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.428 00:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.428 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.000 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.000 { 00:18:27.000 "cntlid": 33, 00:18:27.000 "qid": 0, 00:18:27.000 "state": "enabled", 00:18:27.000 "thread": "nvmf_tgt_poll_group_000", 00:18:27.000 "listen_address": { 00:18:27.000 "trtype": "TCP", 00:18:27.000 "adrfam": "IPv4", 00:18:27.000 "traddr": "10.0.0.2", 00:18:27.000 "trsvcid": "4420" 00:18:27.000 }, 00:18:27.000 "peer_address": { 00:18:27.000 "trtype": "TCP", 00:18:27.000 "adrfam": "IPv4", 00:18:27.000 "traddr": "10.0.0.1", 00:18:27.000 "trsvcid": "39846" 00:18:27.000 }, 00:18:27.000 "auth": { 00:18:27.000 "state": "completed", 00:18:27.000 "digest": "sha256", 00:18:27.000 "dhgroup": "ffdhe6144" 00:18:27.000 } 00:18:27.000 } 00:18:27.000 ]' 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.000 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.260 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.260 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.260 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.260 00:29:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.260 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.260 00:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.202 00:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.485 00:18:28.485 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.485 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.485 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.745 { 00:18:28.745 "cntlid": 35, 00:18:28.745 "qid": 0, 00:18:28.745 "state": "enabled", 00:18:28.745 "thread": "nvmf_tgt_poll_group_000", 00:18:28.745 "listen_address": { 00:18:28.745 "trtype": "TCP", 00:18:28.745 "adrfam": "IPv4", 00:18:28.745 "traddr": "10.0.0.2", 00:18:28.745 "trsvcid": "4420" 00:18:28.745 }, 00:18:28.745 "peer_address": { 00:18:28.745 "trtype": "TCP", 00:18:28.745 "adrfam": "IPv4", 00:18:28.745 "traddr": "10.0.0.1", 00:18:28.745 "trsvcid": "39888" 00:18:28.745 }, 00:18:28.745 "auth": { 00:18:28.745 "state": "completed", 00:18:28.745 "digest": "sha256", 00:18:28.745 "dhgroup": "ffdhe6144" 00:18:28.745 } 00:18:28.745 } 00:18:28.745 ]' 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.745 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.005 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.005 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.005 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.005 00:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.945 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.206 00:18:30.206 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.206 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.206 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.467 { 00:18:30.467 "cntlid": 37, 00:18:30.467 "qid": 0, 00:18:30.467 "state": "enabled", 00:18:30.467 "thread": "nvmf_tgt_poll_group_000", 00:18:30.467 "listen_address": { 00:18:30.467 "trtype": "TCP", 00:18:30.467 "adrfam": "IPv4", 00:18:30.467 "traddr": "10.0.0.2", 00:18:30.467 "trsvcid": "4420" 00:18:30.467 }, 00:18:30.467 "peer_address": { 00:18:30.467 "trtype": "TCP", 00:18:30.467 "adrfam": "IPv4", 00:18:30.467 "traddr": "10.0.0.1", 00:18:30.467 "trsvcid": "45562" 00:18:30.467 }, 00:18:30.467 "auth": { 00:18:30.467 "state": "completed", 00:18:30.467 "digest": "sha256", 00:18:30.467 "dhgroup": "ffdhe6144" 00:18:30.467 } 00:18:30.467 } 00:18:30.467 ]' 00:18:30.467 00:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.467 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.727 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.670 00:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.670 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.929 00:18:31.929 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.929 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.929 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.189 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.189 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.189 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.189 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.190 { 00:18:32.190 "cntlid": 39, 00:18:32.190 "qid": 0, 00:18:32.190 "state": "enabled", 00:18:32.190 "thread": "nvmf_tgt_poll_group_000", 00:18:32.190 "listen_address": { 00:18:32.190 "trtype": "TCP", 00:18:32.190 "adrfam": "IPv4", 00:18:32.190 "traddr": "10.0.0.2", 00:18:32.190 "trsvcid": "4420" 00:18:32.190 }, 00:18:32.190 "peer_address": { 00:18:32.190 "trtype": "TCP", 00:18:32.190 "adrfam": "IPv4", 00:18:32.190 "traddr": "10.0.0.1", 00:18:32.190 "trsvcid": "45594" 00:18:32.190 }, 00:18:32.190 "auth": { 00:18:32.190 "state": "completed", 00:18:32.190 "digest": "sha256", 00:18:32.190 "dhgroup": "ffdhe6144" 00:18:32.190 } 00:18:32.190 } 00:18:32.190 ]' 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.190 00:29:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.190 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.449 00:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.020 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.280 00:29:46 
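[editor's sketch] As the target/auth.sh@91-@96 frames in the trace show, the whole matrix is driven by three nested loops over digest, dhgroup and key index; a sketch of that structure, assuming the digests/dhgroups/keys arrays set up earlier in the script and its hostrpc and connect_authenticate helpers:
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # re-arm the host initiator with the combination under test, then run one round
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done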
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.280 00:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.850 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.850 { 00:18:33.850 "cntlid": 41, 00:18:33.850 "qid": 0, 00:18:33.850 "state": "enabled", 00:18:33.850 "thread": "nvmf_tgt_poll_group_000", 00:18:33.850 "listen_address": { 00:18:33.850 "trtype": "TCP", 00:18:33.850 "adrfam": "IPv4", 00:18:33.850 "traddr": "10.0.0.2", 00:18:33.850 "trsvcid": "4420" 00:18:33.850 }, 00:18:33.850 "peer_address": { 00:18:33.850 "trtype": "TCP", 00:18:33.850 "adrfam": "IPv4", 00:18:33.850 "traddr": "10.0.0.1", 00:18:33.850 "trsvcid": "45632" 00:18:33.850 }, 00:18:33.850 "auth": { 00:18:33.850 "state": "completed", 00:18:33.850 "digest": "sha256", 00:18:33.850 "dhgroup": "ffdhe8192" 00:18:33.850 } 00:18:33.850 } 00:18:33.850 ]' 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.850 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.109 00:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:34.679 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.679 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.679 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.679 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.939 00:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.508 00:18:35.508 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.508 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.508 00:29:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.767 { 00:18:35.767 "cntlid": 43, 00:18:35.767 "qid": 0, 00:18:35.767 "state": "enabled", 00:18:35.767 "thread": "nvmf_tgt_poll_group_000", 00:18:35.767 "listen_address": { 00:18:35.767 "trtype": "TCP", 00:18:35.767 "adrfam": "IPv4", 00:18:35.767 "traddr": "10.0.0.2", 00:18:35.767 "trsvcid": "4420" 00:18:35.767 }, 00:18:35.767 "peer_address": { 00:18:35.767 "trtype": "TCP", 00:18:35.767 "adrfam": "IPv4", 00:18:35.767 "traddr": "10.0.0.1", 00:18:35.767 "trsvcid": "45660" 00:18:35.767 }, 00:18:35.767 "auth": { 00:18:35.767 "state": "completed", 00:18:35.767 "digest": "sha256", 00:18:35.767 "dhgroup": "ffdhe8192" 00:18:35.767 } 00:18:35.767 } 00:18:35.767 ]' 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.767 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.768 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.768 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.768 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.768 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.027 00:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.599 00:29:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.599 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.860 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.429 00:18:37.429 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.429 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.429 00:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.690 { 00:18:37.690 "cntlid": 45, 00:18:37.690 "qid": 0, 00:18:37.690 "state": "enabled", 00:18:37.690 "thread": "nvmf_tgt_poll_group_000", 00:18:37.690 "listen_address": { 00:18:37.690 "trtype": "TCP", 00:18:37.690 "adrfam": "IPv4", 00:18:37.690 "traddr": "10.0.0.2", 00:18:37.690 "trsvcid": "4420" 00:18:37.690 }, 00:18:37.690 
"peer_address": { 00:18:37.690 "trtype": "TCP", 00:18:37.690 "adrfam": "IPv4", 00:18:37.690 "traddr": "10.0.0.1", 00:18:37.690 "trsvcid": "45676" 00:18:37.690 }, 00:18:37.690 "auth": { 00:18:37.690 "state": "completed", 00:18:37.690 "digest": "sha256", 00:18:37.690 "dhgroup": "ffdhe8192" 00:18:37.690 } 00:18:37.690 } 00:18:37.690 ]' 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.690 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.951 00:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.522 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.783 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.354 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.354 { 00:18:39.354 "cntlid": 47, 00:18:39.354 "qid": 0, 00:18:39.354 "state": "enabled", 00:18:39.354 "thread": "nvmf_tgt_poll_group_000", 00:18:39.354 "listen_address": { 00:18:39.354 "trtype": "TCP", 00:18:39.354 "adrfam": "IPv4", 00:18:39.354 "traddr": "10.0.0.2", 00:18:39.354 "trsvcid": "4420" 00:18:39.354 }, 00:18:39.354 "peer_address": { 00:18:39.354 "trtype": "TCP", 00:18:39.354 "adrfam": "IPv4", 00:18:39.354 "traddr": "10.0.0.1", 00:18:39.354 "trsvcid": "45708" 00:18:39.354 }, 00:18:39.354 "auth": { 00:18:39.354 "state": "completed", 00:18:39.354 "digest": "sha256", 00:18:39.354 "dhgroup": "ffdhe8192" 00:18:39.354 } 00:18:39.354 } 00:18:39.354 ]' 00:18:39.354 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.615 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.615 00:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.615 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.615 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.615 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.615 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.615 00:29:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.874 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.445 00:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.705 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.705 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.965 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.965 { 00:18:40.965 "cntlid": 49, 00:18:40.965 "qid": 0, 00:18:40.965 "state": "enabled", 00:18:40.965 "thread": "nvmf_tgt_poll_group_000", 00:18:40.965 "listen_address": { 00:18:40.965 "trtype": "TCP", 00:18:40.966 "adrfam": "IPv4", 00:18:40.966 "traddr": "10.0.0.2", 00:18:40.966 "trsvcid": "4420" 00:18:40.966 }, 00:18:40.966 "peer_address": { 00:18:40.966 "trtype": "TCP", 00:18:40.966 "adrfam": "IPv4", 00:18:40.966 "traddr": "10.0.0.1", 00:18:40.966 "trsvcid": "38986" 00:18:40.966 }, 00:18:40.966 "auth": { 00:18:40.966 "state": "completed", 00:18:40.966 "digest": "sha384", 00:18:40.966 "dhgroup": "null" 00:18:40.966 } 00:18:40.966 } 00:18:40.966 ]' 00:18:40.966 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.966 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.966 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.226 00:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:41.803 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.063 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.323 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.323 00:29:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.584 00:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.584 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.584 { 00:18:42.584 "cntlid": 51, 00:18:42.584 "qid": 0, 00:18:42.584 "state": "enabled", 00:18:42.584 "thread": "nvmf_tgt_poll_group_000", 00:18:42.584 "listen_address": { 00:18:42.584 "trtype": "TCP", 00:18:42.584 "adrfam": "IPv4", 00:18:42.584 "traddr": "10.0.0.2", 00:18:42.584 "trsvcid": "4420" 00:18:42.584 }, 00:18:42.584 "peer_address": { 00:18:42.584 "trtype": "TCP", 00:18:42.584 "adrfam": "IPv4", 00:18:42.584 "traddr": "10.0.0.1", 00:18:42.584 "trsvcid": "39020" 00:18:42.584 }, 00:18:42.584 "auth": { 00:18:42.584 "state": "completed", 00:18:42.584 "digest": "sha384", 00:18:42.584 "dhgroup": "null" 00:18:42.584 } 00:18:42.584 } 00:18:42.584 ]' 00:18:42.584 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.584 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.584 00:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.584 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.584 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.584 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.584 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.584 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.844 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:43.416 00:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:43.676 00:29:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.676 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.978 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.978 { 00:18:43.978 "cntlid": 53, 00:18:43.978 "qid": 0, 00:18:43.978 "state": "enabled", 00:18:43.978 "thread": "nvmf_tgt_poll_group_000", 00:18:43.978 "listen_address": { 00:18:43.978 "trtype": "TCP", 00:18:43.978 "adrfam": "IPv4", 00:18:43.978 "traddr": "10.0.0.2", 00:18:43.978 "trsvcid": "4420" 00:18:43.978 }, 00:18:43.978 "peer_address": { 00:18:43.978 "trtype": "TCP", 00:18:43.978 "adrfam": "IPv4", 00:18:43.978 "traddr": "10.0.0.1", 00:18:43.978 "trsvcid": "39062" 00:18:43.978 }, 00:18:43.978 "auth": { 00:18:43.978 "state": "completed", 00:18:43.978 "digest": "sha384", 00:18:43.978 "dhgroup": "null" 00:18:43.978 } 00:18:43.978 } 00:18:43.978 ]' 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:43.978 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.266 00:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.204 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.463 00:18:45.463 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.463 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.463 00:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.463 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.463 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.463 00:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.463 00:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.463 00:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.723 { 00:18:45.723 "cntlid": 55, 00:18:45.723 "qid": 0, 00:18:45.723 "state": "enabled", 00:18:45.723 "thread": "nvmf_tgt_poll_group_000", 00:18:45.723 "listen_address": { 00:18:45.723 "trtype": "TCP", 00:18:45.723 "adrfam": "IPv4", 00:18:45.723 "traddr": "10.0.0.2", 00:18:45.723 "trsvcid": "4420" 00:18:45.723 }, 00:18:45.723 "peer_address": { 00:18:45.723 "trtype": "TCP", 00:18:45.723 "adrfam": "IPv4", 00:18:45.723 "traddr": "10.0.0.1", 00:18:45.723 "trsvcid": "39084" 00:18:45.723 }, 00:18:45.723 "auth": { 00:18:45.723 "state": "completed", 00:18:45.723 "digest": "sha384", 00:18:45.723 "dhgroup": "null" 00:18:45.723 } 00:18:45.723 } 00:18:45.723 ]' 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.723 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.983 00:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:46.551 00:30:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.551 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.811 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.071 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.071 { 00:18:47.071 "cntlid": 57, 00:18:47.071 "qid": 0, 00:18:47.071 "state": "enabled", 00:18:47.071 "thread": "nvmf_tgt_poll_group_000", 00:18:47.071 "listen_address": { 00:18:47.071 "trtype": "TCP", 00:18:47.071 "adrfam": "IPv4", 00:18:47.071 "traddr": "10.0.0.2", 00:18:47.071 "trsvcid": "4420" 00:18:47.071 }, 00:18:47.071 "peer_address": { 00:18:47.071 "trtype": "TCP", 00:18:47.071 "adrfam": "IPv4", 00:18:47.071 "traddr": "10.0.0.1", 00:18:47.071 "trsvcid": "39114" 00:18:47.071 }, 00:18:47.071 "auth": { 00:18:47.071 "state": "completed", 00:18:47.071 "digest": "sha384", 00:18:47.071 "dhgroup": "ffdhe2048" 00:18:47.071 } 00:18:47.071 } 00:18:47.071 ]' 00:18:47.071 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.331 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.591 00:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.161 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.443 00:18:48.443 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.443 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.443 00:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.703 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.703 { 00:18:48.703 "cntlid": 59, 00:18:48.703 "qid": 0, 00:18:48.703 "state": "enabled", 00:18:48.703 "thread": "nvmf_tgt_poll_group_000", 00:18:48.703 "listen_address": { 00:18:48.703 "trtype": "TCP", 00:18:48.703 "adrfam": "IPv4", 00:18:48.703 "traddr": "10.0.0.2", 00:18:48.703 "trsvcid": "4420" 00:18:48.703 }, 00:18:48.703 "peer_address": { 00:18:48.703 "trtype": "TCP", 00:18:48.703 "adrfam": "IPv4", 00:18:48.703 
"traddr": "10.0.0.1", 00:18:48.703 "trsvcid": "39144" 00:18:48.703 }, 00:18:48.703 "auth": { 00:18:48.704 "state": "completed", 00:18:48.704 "digest": "sha384", 00:18:48.704 "dhgroup": "ffdhe2048" 00:18:48.704 } 00:18:48.704 } 00:18:48.704 ]' 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.704 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.963 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:49.540 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.541 00:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.541 00:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.541 00:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.541 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.541 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.541 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.541 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.800 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.800 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.060 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.061 { 00:18:50.061 "cntlid": 61, 00:18:50.061 "qid": 0, 00:18:50.061 "state": "enabled", 00:18:50.061 "thread": "nvmf_tgt_poll_group_000", 00:18:50.061 "listen_address": { 00:18:50.061 "trtype": "TCP", 00:18:50.061 "adrfam": "IPv4", 00:18:50.061 "traddr": "10.0.0.2", 00:18:50.061 "trsvcid": "4420" 00:18:50.061 }, 00:18:50.061 "peer_address": { 00:18:50.061 "trtype": "TCP", 00:18:50.061 "adrfam": "IPv4", 00:18:50.061 "traddr": "10.0.0.1", 00:18:50.061 "trsvcid": "56014" 00:18:50.061 }, 00:18:50.061 "auth": { 00:18:50.061 "state": "completed", 00:18:50.061 "digest": "sha384", 00:18:50.061 "dhgroup": "ffdhe2048" 00:18:50.061 } 00:18:50.061 } 00:18:50.061 ]' 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.061 00:30:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.321 00:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.892 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.151 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.411 00:18:51.411 00:30:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.411 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.411 00:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.411 { 00:18:51.411 "cntlid": 63, 00:18:51.411 "qid": 0, 00:18:51.411 "state": "enabled", 00:18:51.411 "thread": "nvmf_tgt_poll_group_000", 00:18:51.411 "listen_address": { 00:18:51.411 "trtype": "TCP", 00:18:51.411 "adrfam": "IPv4", 00:18:51.411 "traddr": "10.0.0.2", 00:18:51.411 "trsvcid": "4420" 00:18:51.411 }, 00:18:51.411 "peer_address": { 00:18:51.411 "trtype": "TCP", 00:18:51.411 "adrfam": "IPv4", 00:18:51.411 "traddr": "10.0.0.1", 00:18:51.411 "trsvcid": "56050" 00:18:51.411 }, 00:18:51.411 "auth": { 00:18:51.411 "state": "completed", 00:18:51.411 "digest": "sha384", 00:18:51.411 "dhgroup": "ffdhe2048" 00:18:51.411 } 00:18:51.411 } 00:18:51.411 ]' 00:18:51.411 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.672 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.932 00:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.502 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.762 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.022 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.022 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.022 { 
00:18:53.022 "cntlid": 65, 00:18:53.022 "qid": 0, 00:18:53.022 "state": "enabled", 00:18:53.022 "thread": "nvmf_tgt_poll_group_000", 00:18:53.022 "listen_address": { 00:18:53.022 "trtype": "TCP", 00:18:53.022 "adrfam": "IPv4", 00:18:53.022 "traddr": "10.0.0.2", 00:18:53.022 "trsvcid": "4420" 00:18:53.022 }, 00:18:53.022 "peer_address": { 00:18:53.022 "trtype": "TCP", 00:18:53.022 "adrfam": "IPv4", 00:18:53.022 "traddr": "10.0.0.1", 00:18:53.022 "trsvcid": "56074" 00:18:53.022 }, 00:18:53.022 "auth": { 00:18:53.022 "state": "completed", 00:18:53.022 "digest": "sha384", 00:18:53.022 "dhgroup": "ffdhe3072" 00:18:53.022 } 00:18:53.022 } 00:18:53.022 ]' 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.300 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.560 00:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:18:54.129 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.130 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.390 00:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.649 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.649 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.649 { 00:18:54.649 "cntlid": 67, 00:18:54.649 "qid": 0, 00:18:54.649 "state": "enabled", 00:18:54.649 "thread": "nvmf_tgt_poll_group_000", 00:18:54.649 "listen_address": { 00:18:54.649 "trtype": "TCP", 00:18:54.649 "adrfam": "IPv4", 00:18:54.649 "traddr": "10.0.0.2", 00:18:54.649 "trsvcid": "4420" 00:18:54.649 }, 00:18:54.649 "peer_address": { 00:18:54.649 "trtype": "TCP", 00:18:54.649 "adrfam": "IPv4", 00:18:54.649 "traddr": "10.0.0.1", 00:18:54.649 "trsvcid": "56108" 00:18:54.649 }, 00:18:54.649 "auth": { 00:18:54.649 "state": "completed", 00:18:54.649 "digest": "sha384", 00:18:54.649 "dhgroup": "ffdhe3072" 00:18:54.649 } 00:18:54.649 } 00:18:54.649 ]' 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.909 00:30:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.909 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.170 00:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.739 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.999 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.259 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.259 { 00:18:56.259 "cntlid": 69, 00:18:56.259 "qid": 0, 00:18:56.259 "state": "enabled", 00:18:56.259 "thread": "nvmf_tgt_poll_group_000", 00:18:56.259 "listen_address": { 00:18:56.259 "trtype": "TCP", 00:18:56.259 "adrfam": "IPv4", 00:18:56.259 "traddr": "10.0.0.2", 00:18:56.259 "trsvcid": "4420" 00:18:56.259 }, 00:18:56.259 "peer_address": { 00:18:56.259 "trtype": "TCP", 00:18:56.259 "adrfam": "IPv4", 00:18:56.259 "traddr": "10.0.0.1", 00:18:56.259 "trsvcid": "56144" 00:18:56.259 }, 00:18:56.259 "auth": { 00:18:56.259 "state": "completed", 00:18:56.259 "digest": "sha384", 00:18:56.259 "dhgroup": "ffdhe3072" 00:18:56.259 } 00:18:56.259 } 00:18:56.259 ]' 00:18:56.259 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.519 00:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.780 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret 
DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.351 00:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.612 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.873 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.873 { 00:18:57.873 "cntlid": 71, 00:18:57.873 "qid": 0, 00:18:57.873 "state": "enabled", 00:18:57.873 "thread": "nvmf_tgt_poll_group_000", 00:18:57.873 "listen_address": { 00:18:57.873 "trtype": "TCP", 00:18:57.873 "adrfam": "IPv4", 00:18:57.873 "traddr": "10.0.0.2", 00:18:57.873 "trsvcid": "4420" 00:18:57.873 }, 00:18:57.873 "peer_address": { 00:18:57.873 "trtype": "TCP", 00:18:57.873 "adrfam": "IPv4", 00:18:57.873 "traddr": "10.0.0.1", 00:18:57.873 "trsvcid": "56170" 00:18:57.873 }, 00:18:57.873 "auth": { 00:18:57.873 "state": "completed", 00:18:57.873 "digest": "sha384", 00:18:57.873 "dhgroup": "ffdhe3072" 00:18:57.873 } 00:18:57.873 } 00:18:57.873 ]' 00:18:57.873 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.133 00:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.076 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.338 00:18:59.338 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.338 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.338 00:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.598 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.598 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.598 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.598 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.598 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.599 { 00:18:59.599 "cntlid": 73, 00:18:59.599 "qid": 0, 00:18:59.599 "state": "enabled", 00:18:59.599 "thread": "nvmf_tgt_poll_group_000", 00:18:59.599 "listen_address": { 00:18:59.599 "trtype": "TCP", 00:18:59.599 "adrfam": "IPv4", 00:18:59.599 "traddr": "10.0.0.2", 00:18:59.599 "trsvcid": "4420" 00:18:59.599 }, 00:18:59.599 "peer_address": { 00:18:59.599 "trtype": "TCP", 00:18:59.599 "adrfam": "IPv4", 00:18:59.599 "traddr": "10.0.0.1", 00:18:59.599 "trsvcid": "56194" 00:18:59.599 }, 00:18:59.599 "auth": { 00:18:59.599 
"state": "completed", 00:18:59.599 "digest": "sha384", 00:18:59.599 "dhgroup": "ffdhe4096" 00:18:59.599 } 00:18:59.599 } 00:18:59.599 ]' 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.599 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.859 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:00.429 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.430 00:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.692 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.953 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.953 { 00:19:00.953 "cntlid": 75, 00:19:00.953 "qid": 0, 00:19:00.953 "state": "enabled", 00:19:00.953 "thread": "nvmf_tgt_poll_group_000", 00:19:00.953 "listen_address": { 00:19:00.953 "trtype": "TCP", 00:19:00.953 "adrfam": "IPv4", 00:19:00.953 "traddr": "10.0.0.2", 00:19:00.953 "trsvcid": "4420" 00:19:00.953 }, 00:19:00.953 "peer_address": { 00:19:00.953 "trtype": "TCP", 00:19:00.953 "adrfam": "IPv4", 00:19:00.953 "traddr": "10.0.0.1", 00:19:00.953 "trsvcid": "57622" 00:19:00.953 }, 00:19:00.953 "auth": { 00:19:00.953 "state": "completed", 00:19:00.953 "digest": "sha384", 00:19:00.953 "dhgroup": "ffdhe4096" 00:19:00.953 } 00:19:00.953 } 00:19:00.953 ]' 00:19:00.953 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.214 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.476 00:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.048 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.308 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:02.567 00:19:02.567 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.567 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.567 00:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.567 { 00:19:02.567 "cntlid": 77, 00:19:02.567 "qid": 0, 00:19:02.567 "state": "enabled", 00:19:02.567 "thread": "nvmf_tgt_poll_group_000", 00:19:02.567 "listen_address": { 00:19:02.567 "trtype": "TCP", 00:19:02.567 "adrfam": "IPv4", 00:19:02.567 "traddr": "10.0.0.2", 00:19:02.567 "trsvcid": "4420" 00:19:02.567 }, 00:19:02.567 "peer_address": { 00:19:02.567 "trtype": "TCP", 00:19:02.567 "adrfam": "IPv4", 00:19:02.567 "traddr": "10.0.0.1", 00:19:02.567 "trsvcid": "57640" 00:19:02.567 }, 00:19:02.567 "auth": { 00:19:02.567 "state": "completed", 00:19:02.567 "digest": "sha384", 00:19:02.567 "dhgroup": "ffdhe4096" 00:19:02.567 } 00:19:02.567 } 00:19:02.567 ]' 00:19:02.567 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.827 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.088 00:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:03.659 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.660 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.921 00:19:03.921 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.921 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.921 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.181 { 00:19:04.181 "cntlid": 79, 00:19:04.181 "qid": 
0, 00:19:04.181 "state": "enabled", 00:19:04.181 "thread": "nvmf_tgt_poll_group_000", 00:19:04.181 "listen_address": { 00:19:04.181 "trtype": "TCP", 00:19:04.181 "adrfam": "IPv4", 00:19:04.181 "traddr": "10.0.0.2", 00:19:04.181 "trsvcid": "4420" 00:19:04.181 }, 00:19:04.181 "peer_address": { 00:19:04.181 "trtype": "TCP", 00:19:04.181 "adrfam": "IPv4", 00:19:04.181 "traddr": "10.0.0.1", 00:19:04.181 "trsvcid": "57664" 00:19:04.181 }, 00:19:04.181 "auth": { 00:19:04.181 "state": "completed", 00:19:04.181 "digest": "sha384", 00:19:04.181 "dhgroup": "ffdhe4096" 00:19:04.181 } 00:19:04.181 } 00:19:04.181 ]' 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.181 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.442 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.442 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.442 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.442 00:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:05.011 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.272 00:30:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.272 00:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.531 00:19:05.531 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.531 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.531 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.791 { 00:19:05.791 "cntlid": 81, 00:19:05.791 "qid": 0, 00:19:05.791 "state": "enabled", 00:19:05.791 "thread": "nvmf_tgt_poll_group_000", 00:19:05.791 "listen_address": { 00:19:05.791 "trtype": "TCP", 00:19:05.791 "adrfam": "IPv4", 00:19:05.791 "traddr": "10.0.0.2", 00:19:05.791 "trsvcid": "4420" 00:19:05.791 }, 00:19:05.791 "peer_address": { 00:19:05.791 "trtype": "TCP", 00:19:05.791 "adrfam": "IPv4", 00:19:05.791 "traddr": "10.0.0.1", 00:19:05.791 "trsvcid": "57676" 00:19:05.791 }, 00:19:05.791 "auth": { 00:19:05.791 "state": "completed", 00:19:05.791 "digest": "sha384", 00:19:05.791 "dhgroup": "ffdhe6144" 00:19:05.791 } 00:19:05.791 } 00:19:05.791 ]' 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.791 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.050 00:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:06.621 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.881 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.141 00:19:07.141 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.141 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.141 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.403 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.403 { 00:19:07.403 "cntlid": 83, 00:19:07.403 "qid": 0, 00:19:07.403 "state": "enabled", 00:19:07.403 "thread": "nvmf_tgt_poll_group_000", 00:19:07.403 "listen_address": { 00:19:07.403 "trtype": "TCP", 00:19:07.403 "adrfam": "IPv4", 00:19:07.403 "traddr": "10.0.0.2", 00:19:07.403 "trsvcid": "4420" 00:19:07.404 }, 00:19:07.404 "peer_address": { 00:19:07.404 "trtype": "TCP", 00:19:07.404 "adrfam": "IPv4", 00:19:07.404 "traddr": "10.0.0.1", 00:19:07.404 "trsvcid": "57702" 00:19:07.404 }, 00:19:07.404 "auth": { 00:19:07.404 "state": "completed", 00:19:07.404 "digest": "sha384", 00:19:07.404 "dhgroup": "ffdhe6144" 00:19:07.404 } 00:19:07.404 } 00:19:07.404 ]' 00:19:07.404 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.404 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.404 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.404 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.404 00:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.404 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.404 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.404 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.664 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret 
DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.234 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.495 00:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.754 00:19:08.754 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.754 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.754 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.013 { 00:19:09.013 "cntlid": 85, 00:19:09.013 "qid": 0, 00:19:09.013 "state": "enabled", 00:19:09.013 "thread": "nvmf_tgt_poll_group_000", 00:19:09.013 "listen_address": { 00:19:09.013 "trtype": "TCP", 00:19:09.013 "adrfam": "IPv4", 00:19:09.013 "traddr": "10.0.0.2", 00:19:09.013 "trsvcid": "4420" 00:19:09.013 }, 00:19:09.013 "peer_address": { 00:19:09.013 "trtype": "TCP", 00:19:09.013 "adrfam": "IPv4", 00:19:09.013 "traddr": "10.0.0.1", 00:19:09.013 "trsvcid": "57742" 00:19:09.013 }, 00:19:09.013 "auth": { 00:19:09.013 "state": "completed", 00:19:09.013 "digest": "sha384", 00:19:09.013 "dhgroup": "ffdhe6144" 00:19:09.013 } 00:19:09.013 } 00:19:09.013 ]' 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.013 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.284 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:19:09.932 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.190 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:10.190 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.190 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.190 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:10.190 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.191 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.449 00:19:10.449 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.449 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.449 00:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.708 { 00:19:10.708 "cntlid": 87, 00:19:10.708 "qid": 0, 00:19:10.708 "state": "enabled", 00:19:10.708 "thread": "nvmf_tgt_poll_group_000", 00:19:10.708 "listen_address": { 00:19:10.708 "trtype": "TCP", 00:19:10.708 "adrfam": "IPv4", 00:19:10.708 "traddr": "10.0.0.2", 00:19:10.708 "trsvcid": "4420" 00:19:10.708 }, 00:19:10.708 "peer_address": { 00:19:10.708 "trtype": "TCP", 00:19:10.708 "adrfam": "IPv4", 00:19:10.708 "traddr": "10.0.0.1", 00:19:10.708 "trsvcid": "40220" 00:19:10.708 }, 00:19:10.708 "auth": { 00:19:10.708 "state": "completed", 
00:19:10.708 "digest": "sha384", 00:19:10.708 "dhgroup": "ffdhe6144" 00:19:10.708 } 00:19:10.708 } 00:19:10.708 ]' 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.708 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.967 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:11.536 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.536 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.536 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.536 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.797 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.366 00:19:12.366 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.366 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.366 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.628 { 00:19:12.628 "cntlid": 89, 00:19:12.628 "qid": 0, 00:19:12.628 "state": "enabled", 00:19:12.628 "thread": "nvmf_tgt_poll_group_000", 00:19:12.628 "listen_address": { 00:19:12.628 "trtype": "TCP", 00:19:12.628 "adrfam": "IPv4", 00:19:12.628 "traddr": "10.0.0.2", 00:19:12.628 "trsvcid": "4420" 00:19:12.628 }, 00:19:12.628 "peer_address": { 00:19:12.628 "trtype": "TCP", 00:19:12.628 "adrfam": "IPv4", 00:19:12.628 "traddr": "10.0.0.1", 00:19:12.628 "trsvcid": "40242" 00:19:12.628 }, 00:19:12.628 "auth": { 00:19:12.628 "state": "completed", 00:19:12.628 "digest": "sha384", 00:19:12.628 "dhgroup": "ffdhe8192" 00:19:12.628 } 00:19:12.628 } 00:19:12.628 ]' 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.628 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.889 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.460 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.720 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:13.720 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.721 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
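The sha384/ffdhe8192 pass has just re-attached the host controller with key1/ckey1; the trace lines that follow read the result back from both sides to prove the authentication actually completed. Condensed, the check is two RPCs plus the jq filters shown in the trace (nvme0 and the cnode0 subsystem NQN are the names used throughout this run):

    # Host side: the attached controller should be listed as nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

    # Target side: the qpair should report the negotiated DH-HMAC-CHAP parameters
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
    # expected for this pass: completed / sha384 / ffdhe8192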
00:19:14.291 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.291 { 00:19:14.291 "cntlid": 91, 00:19:14.291 "qid": 0, 00:19:14.291 "state": "enabled", 00:19:14.291 "thread": "nvmf_tgt_poll_group_000", 00:19:14.291 "listen_address": { 00:19:14.291 "trtype": "TCP", 00:19:14.291 "adrfam": "IPv4", 00:19:14.291 "traddr": "10.0.0.2", 00:19:14.291 "trsvcid": "4420" 00:19:14.291 }, 00:19:14.291 "peer_address": { 00:19:14.291 "trtype": "TCP", 00:19:14.291 "adrfam": "IPv4", 00:19:14.291 "traddr": "10.0.0.1", 00:19:14.291 "trsvcid": "40272" 00:19:14.291 }, 00:19:14.291 "auth": { 00:19:14.291 "state": "completed", 00:19:14.291 "digest": "sha384", 00:19:14.291 "dhgroup": "ffdhe8192" 00:19:14.291 } 00:19:14.291 } 00:19:14.291 ]' 00:19:14.291 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.552 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.552 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.552 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.552 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.552 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.552 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.552 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.812 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.385 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.645 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.214 00:19:16.214 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.214 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.214 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.215 { 
00:19:16.215 "cntlid": 93, 00:19:16.215 "qid": 0, 00:19:16.215 "state": "enabled", 00:19:16.215 "thread": "nvmf_tgt_poll_group_000", 00:19:16.215 "listen_address": { 00:19:16.215 "trtype": "TCP", 00:19:16.215 "adrfam": "IPv4", 00:19:16.215 "traddr": "10.0.0.2", 00:19:16.215 "trsvcid": "4420" 00:19:16.215 }, 00:19:16.215 "peer_address": { 00:19:16.215 "trtype": "TCP", 00:19:16.215 "adrfam": "IPv4", 00:19:16.215 "traddr": "10.0.0.1", 00:19:16.215 "trsvcid": "40318" 00:19:16.215 }, 00:19:16.215 "auth": { 00:19:16.215 "state": "completed", 00:19:16.215 "digest": "sha384", 00:19:16.215 "dhgroup": "ffdhe8192" 00:19:16.215 } 00:19:16.215 } 00:19:16.215 ]' 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.215 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.475 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.475 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.475 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.475 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.475 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.475 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.416 00:30:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.416 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.987 00:19:17.987 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.987 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.987 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.248 { 00:19:18.248 "cntlid": 95, 00:19:18.248 "qid": 0, 00:19:18.248 "state": "enabled", 00:19:18.248 "thread": "nvmf_tgt_poll_group_000", 00:19:18.248 "listen_address": { 00:19:18.248 "trtype": "TCP", 00:19:18.248 "adrfam": "IPv4", 00:19:18.248 "traddr": "10.0.0.2", 00:19:18.248 "trsvcid": "4420" 00:19:18.248 }, 00:19:18.248 "peer_address": { 00:19:18.248 "trtype": "TCP", 00:19:18.248 "adrfam": "IPv4", 00:19:18.248 "traddr": "10.0.0.1", 00:19:18.248 "trsvcid": "40334" 00:19:18.248 }, 00:19:18.248 "auth": { 00:19:18.248 "state": "completed", 00:19:18.248 "digest": "sha384", 00:19:18.248 "dhgroup": "ffdhe8192" 00:19:18.248 } 00:19:18.248 } 00:19:18.248 ]' 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.248 00:30:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.248 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.508 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.079 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.340 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.601 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.601 { 00:19:19.601 "cntlid": 97, 00:19:19.601 "qid": 0, 00:19:19.601 "state": "enabled", 00:19:19.601 "thread": "nvmf_tgt_poll_group_000", 00:19:19.601 "listen_address": { 00:19:19.601 "trtype": "TCP", 00:19:19.601 "adrfam": "IPv4", 00:19:19.601 "traddr": "10.0.0.2", 00:19:19.601 "trsvcid": "4420" 00:19:19.601 }, 00:19:19.601 "peer_address": { 00:19:19.601 "trtype": "TCP", 00:19:19.601 "adrfam": "IPv4", 00:19:19.601 "traddr": "10.0.0.1", 00:19:19.601 "trsvcid": "40362" 00:19:19.601 }, 00:19:19.601 "auth": { 00:19:19.601 "state": "completed", 00:19:19.601 "digest": "sha512", 00:19:19.601 "dhgroup": "null" 00:19:19.601 } 00:19:19.601 } 00:19:19.601 ]' 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.601 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.860 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret 
DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.800 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.801 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.801 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.801 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.801 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.801 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.062 00:19:21.062 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.062 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.062 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.324 { 00:19:21.324 "cntlid": 99, 00:19:21.324 "qid": 0, 00:19:21.324 "state": "enabled", 00:19:21.324 "thread": "nvmf_tgt_poll_group_000", 00:19:21.324 "listen_address": { 00:19:21.324 "trtype": "TCP", 00:19:21.324 "adrfam": "IPv4", 00:19:21.324 "traddr": "10.0.0.2", 00:19:21.324 "trsvcid": "4420" 00:19:21.324 }, 00:19:21.324 "peer_address": { 00:19:21.324 "trtype": "TCP", 00:19:21.324 "adrfam": "IPv4", 00:19:21.324 "traddr": "10.0.0.1", 00:19:21.324 "trsvcid": "59474" 00:19:21.324 }, 00:19:21.324 "auth": { 00:19:21.324 "state": "completed", 00:19:21.324 "digest": "sha512", 00:19:21.324 "dhgroup": "null" 00:19:21.324 } 00:19:21.324 } 00:19:21.324 ]' 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.324 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.585 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.155 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.155 00:30:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.415 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.416 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.416 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.675 00:19:22.675 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.675 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.675 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.935 { 00:19:22.935 "cntlid": 101, 00:19:22.935 "qid": 0, 00:19:22.935 "state": "enabled", 00:19:22.935 "thread": "nvmf_tgt_poll_group_000", 00:19:22.935 "listen_address": { 00:19:22.935 "trtype": "TCP", 00:19:22.935 "adrfam": "IPv4", 00:19:22.935 "traddr": "10.0.0.2", 00:19:22.935 "trsvcid": "4420" 00:19:22.935 }, 00:19:22.935 "peer_address": { 00:19:22.935 "trtype": "TCP", 00:19:22.935 "adrfam": "IPv4", 00:19:22.935 "traddr": "10.0.0.1", 00:19:22.935 "trsvcid": "59506" 00:19:22.935 }, 00:19:22.935 "auth": 
{ 00:19:22.935 "state": "completed", 00:19:22.935 "digest": "sha512", 00:19:22.935 "dhgroup": "null" 00:19:22.935 } 00:19:22.935 } 00:19:22.935 ]' 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.935 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.195 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.764 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.023 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.283 00:19:24.283 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.283 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.283 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.542 { 00:19:24.542 "cntlid": 103, 00:19:24.542 "qid": 0, 00:19:24.542 "state": "enabled", 00:19:24.542 "thread": "nvmf_tgt_poll_group_000", 00:19:24.542 "listen_address": { 00:19:24.542 "trtype": "TCP", 00:19:24.542 "adrfam": "IPv4", 00:19:24.542 "traddr": "10.0.0.2", 00:19:24.542 "trsvcid": "4420" 00:19:24.542 }, 00:19:24.542 "peer_address": { 00:19:24.542 "trtype": "TCP", 00:19:24.542 "adrfam": "IPv4", 00:19:24.542 "traddr": "10.0.0.1", 00:19:24.542 "trsvcid": "59536" 00:19:24.542 }, 00:19:24.542 "auth": { 00:19:24.542 "state": "completed", 00:19:24.542 "digest": "sha512", 00:19:24.542 "dhgroup": "null" 00:19:24.542 } 00:19:24.542 } 00:19:24.542 ]' 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.542 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.542 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.542 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.542 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.542 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.542 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.801 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.370 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.630 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.890 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.890 00:30:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.890 { 00:19:25.890 "cntlid": 105, 00:19:25.890 "qid": 0, 00:19:25.890 "state": "enabled", 00:19:25.890 "thread": "nvmf_tgt_poll_group_000", 00:19:25.890 "listen_address": { 00:19:25.890 "trtype": "TCP", 00:19:25.890 "adrfam": "IPv4", 00:19:25.890 "traddr": "10.0.0.2", 00:19:25.890 "trsvcid": "4420" 00:19:25.890 }, 00:19:25.890 "peer_address": { 00:19:25.890 "trtype": "TCP", 00:19:25.890 "adrfam": "IPv4", 00:19:25.890 "traddr": "10.0.0.1", 00:19:25.890 "trsvcid": "59564" 00:19:25.890 }, 00:19:25.890 "auth": { 00:19:25.890 "state": "completed", 00:19:25.890 "digest": "sha512", 00:19:25.890 "dhgroup": "ffdhe2048" 00:19:25.890 } 00:19:25.890 } 00:19:25.890 ]' 00:19:25.890 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.149 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.409 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
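Each pass finishes the same way: the SPDK host controller is detached once its qpair has been verified, the kernel initiator then exercises the same key in band with nvme connect/disconnect, and finally the host entry is removed from the subsystem so the next key can be registered. In outline (rpc.py path shortened, $HOSTNQN again standing in for the uuid-based host NQN):

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"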
00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:26.980 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.240 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.499 00:19:27.499 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.499 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.499 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.499 { 00:19:27.499 "cntlid": 107, 00:19:27.499 "qid": 0, 00:19:27.499 "state": "enabled", 00:19:27.499 "thread": 
"nvmf_tgt_poll_group_000", 00:19:27.499 "listen_address": { 00:19:27.499 "trtype": "TCP", 00:19:27.499 "adrfam": "IPv4", 00:19:27.499 "traddr": "10.0.0.2", 00:19:27.499 "trsvcid": "4420" 00:19:27.499 }, 00:19:27.499 "peer_address": { 00:19:27.499 "trtype": "TCP", 00:19:27.499 "adrfam": "IPv4", 00:19:27.499 "traddr": "10.0.0.1", 00:19:27.499 "trsvcid": "59580" 00:19:27.499 }, 00:19:27.499 "auth": { 00:19:27.499 "state": "completed", 00:19:27.499 "digest": "sha512", 00:19:27.499 "dhgroup": "ffdhe2048" 00:19:27.499 } 00:19:27.499 } 00:19:27.499 ]' 00:19:27.499 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.758 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.017 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.586 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.846 00:30:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.846 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.107 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.107 { 00:19:29.107 "cntlid": 109, 00:19:29.107 "qid": 0, 00:19:29.107 "state": "enabled", 00:19:29.107 "thread": "nvmf_tgt_poll_group_000", 00:19:29.107 "listen_address": { 00:19:29.107 "trtype": "TCP", 00:19:29.107 "adrfam": "IPv4", 00:19:29.107 "traddr": "10.0.0.2", 00:19:29.107 "trsvcid": "4420" 00:19:29.107 }, 00:19:29.107 "peer_address": { 00:19:29.107 "trtype": "TCP", 00:19:29.107 "adrfam": "IPv4", 00:19:29.107 "traddr": "10.0.0.1", 00:19:29.107 "trsvcid": "59600" 00:19:29.107 }, 00:19:29.107 "auth": { 00:19:29.107 "state": "completed", 00:19:29.107 "digest": "sha512", 00:19:29.107 "dhgroup": "ffdhe2048" 00:19:29.107 } 00:19:29.107 } 00:19:29.107 ]' 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.107 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.367 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.304 00:30:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.565 00:19:30.565 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.565 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.565 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.825 { 00:19:30.825 "cntlid": 111, 00:19:30.825 "qid": 0, 00:19:30.825 "state": "enabled", 00:19:30.825 "thread": "nvmf_tgt_poll_group_000", 00:19:30.825 "listen_address": { 00:19:30.825 "trtype": "TCP", 00:19:30.825 "adrfam": "IPv4", 00:19:30.825 "traddr": "10.0.0.2", 00:19:30.825 "trsvcid": "4420" 00:19:30.825 }, 00:19:30.825 "peer_address": { 00:19:30.825 "trtype": "TCP", 00:19:30.825 "adrfam": "IPv4", 00:19:30.825 "traddr": "10.0.0.1", 00:19:30.825 "trsvcid": "35282" 00:19:30.825 }, 00:19:30.825 "auth": { 00:19:30.825 "state": "completed", 00:19:30.825 "digest": "sha512", 00:19:30.825 "dhgroup": "ffdhe2048" 00:19:30.825 } 00:19:30.825 } 00:19:30.825 ]' 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.825 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.085 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:31.655 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.915 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.175 00:19:32.175 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.175 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.175 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.434 { 00:19:32.434 "cntlid": 113, 00:19:32.434 "qid": 0, 00:19:32.434 "state": "enabled", 00:19:32.434 "thread": "nvmf_tgt_poll_group_000", 00:19:32.434 "listen_address": { 00:19:32.434 "trtype": "TCP", 00:19:32.434 "adrfam": "IPv4", 00:19:32.434 "traddr": "10.0.0.2", 00:19:32.434 "trsvcid": "4420" 00:19:32.434 }, 00:19:32.434 "peer_address": { 00:19:32.434 "trtype": "TCP", 00:19:32.434 "adrfam": "IPv4", 00:19:32.434 "traddr": "10.0.0.1", 00:19:32.434 "trsvcid": "35306" 00:19:32.434 }, 00:19:32.434 "auth": { 00:19:32.434 "state": "completed", 00:19:32.434 "digest": "sha512", 00:19:32.434 "dhgroup": "ffdhe3072" 00:19:32.434 } 00:19:32.434 } 00:19:32.434 ]' 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.434 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.434 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.434 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.434 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.694 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.266 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.527 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.787 00:19:33.787 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.788 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.788 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.048 { 00:19:34.048 "cntlid": 115, 00:19:34.048 "qid": 0, 00:19:34.048 "state": "enabled", 00:19:34.048 "thread": "nvmf_tgt_poll_group_000", 00:19:34.048 "listen_address": { 00:19:34.048 "trtype": "TCP", 00:19:34.048 "adrfam": "IPv4", 00:19:34.048 "traddr": "10.0.0.2", 00:19:34.048 "trsvcid": "4420" 00:19:34.048 }, 00:19:34.048 "peer_address": { 00:19:34.048 "trtype": "TCP", 00:19:34.048 "adrfam": "IPv4", 00:19:34.048 "traddr": "10.0.0.1", 00:19:34.048 "trsvcid": "35334" 00:19:34.048 }, 00:19:34.048 "auth": { 00:19:34.048 "state": "completed", 00:19:34.048 "digest": "sha512", 00:19:34.048 "dhgroup": "ffdhe3072" 00:19:34.048 } 00:19:34.048 } 
00:19:34.048 ]' 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.048 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.308 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.878 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.138 00:30:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.138 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.399 00:19:35.399 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.399 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.399 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.659 { 00:19:35.659 "cntlid": 117, 00:19:35.659 "qid": 0, 00:19:35.659 "state": "enabled", 00:19:35.659 "thread": "nvmf_tgt_poll_group_000", 00:19:35.659 "listen_address": { 00:19:35.659 "trtype": "TCP", 00:19:35.659 "adrfam": "IPv4", 00:19:35.659 "traddr": "10.0.0.2", 00:19:35.659 "trsvcid": "4420" 00:19:35.659 }, 00:19:35.659 "peer_address": { 00:19:35.659 "trtype": "TCP", 00:19:35.659 "adrfam": "IPv4", 00:19:35.659 "traddr": "10.0.0.1", 00:19:35.659 "trsvcid": "35354" 00:19:35.659 }, 00:19:35.659 "auth": { 00:19:35.659 "state": "completed", 00:19:35.659 "digest": "sha512", 00:19:35.659 "dhgroup": "ffdhe3072" 00:19:35.659 } 00:19:35.659 } 00:19:35.659 ]' 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.659 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.967 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.607 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.868 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.868 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.129 { 00:19:37.129 "cntlid": 119, 00:19:37.129 "qid": 0, 00:19:37.129 "state": "enabled", 00:19:37.129 "thread": "nvmf_tgt_poll_group_000", 00:19:37.129 "listen_address": { 00:19:37.129 "trtype": "TCP", 00:19:37.129 "adrfam": "IPv4", 00:19:37.129 "traddr": "10.0.0.2", 00:19:37.129 "trsvcid": "4420" 00:19:37.129 }, 00:19:37.129 "peer_address": { 00:19:37.129 "trtype": "TCP", 00:19:37.129 "adrfam": "IPv4", 00:19:37.129 "traddr": "10.0.0.1", 00:19:37.129 "trsvcid": "35388" 00:19:37.129 }, 00:19:37.129 "auth": { 00:19:37.129 "state": "completed", 00:19:37.129 "digest": "sha512", 00:19:37.129 "dhgroup": "ffdhe3072" 00:19:37.129 } 00:19:37.129 } 00:19:37.129 ]' 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.129 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.389 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.328 00:30:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.328 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.587 00:19:38.587 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.587 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.588 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.848 { 00:19:38.848 "cntlid": 121, 00:19:38.848 "qid": 0, 00:19:38.848 "state": "enabled", 00:19:38.848 "thread": "nvmf_tgt_poll_group_000", 00:19:38.848 "listen_address": { 00:19:38.848 "trtype": "TCP", 00:19:38.848 "adrfam": "IPv4", 
00:19:38.848 "traddr": "10.0.0.2", 00:19:38.848 "trsvcid": "4420" 00:19:38.848 }, 00:19:38.848 "peer_address": { 00:19:38.848 "trtype": "TCP", 00:19:38.848 "adrfam": "IPv4", 00:19:38.848 "traddr": "10.0.0.1", 00:19:38.848 "trsvcid": "35418" 00:19:38.848 }, 00:19:38.848 "auth": { 00:19:38.848 "state": "completed", 00:19:38.848 "digest": "sha512", 00:19:38.848 "dhgroup": "ffdhe4096" 00:19:38.848 } 00:19:38.848 } 00:19:38.848 ]' 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.848 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.107 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.047 00:30:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.047 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.307 00:19:40.307 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.307 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.307 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.567 { 00:19:40.567 "cntlid": 123, 00:19:40.567 "qid": 0, 00:19:40.567 "state": "enabled", 00:19:40.567 "thread": "nvmf_tgt_poll_group_000", 00:19:40.567 "listen_address": { 00:19:40.567 "trtype": "TCP", 00:19:40.567 "adrfam": "IPv4", 00:19:40.567 "traddr": "10.0.0.2", 00:19:40.567 "trsvcid": "4420" 00:19:40.567 }, 00:19:40.567 "peer_address": { 00:19:40.567 "trtype": "TCP", 00:19:40.567 "adrfam": "IPv4", 00:19:40.567 "traddr": "10.0.0.1", 00:19:40.567 "trsvcid": "46352" 00:19:40.567 }, 00:19:40.567 "auth": { 00:19:40.567 "state": "completed", 00:19:40.567 "digest": "sha512", 00:19:40.567 "dhgroup": "ffdhe4096" 00:19:40.567 } 00:19:40.567 } 00:19:40.567 ]' 00:19:40.567 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.567 00:30:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.567 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.826 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.396 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.657 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.916 00:19:41.916 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.916 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.916 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.176 { 00:19:42.176 "cntlid": 125, 00:19:42.176 "qid": 0, 00:19:42.176 "state": "enabled", 00:19:42.176 "thread": "nvmf_tgt_poll_group_000", 00:19:42.176 "listen_address": { 00:19:42.176 "trtype": "TCP", 00:19:42.176 "adrfam": "IPv4", 00:19:42.176 "traddr": "10.0.0.2", 00:19:42.176 "trsvcid": "4420" 00:19:42.176 }, 00:19:42.176 "peer_address": { 00:19:42.176 "trtype": "TCP", 00:19:42.176 "adrfam": "IPv4", 00:19:42.176 "traddr": "10.0.0.1", 00:19:42.176 "trsvcid": "46374" 00:19:42.176 }, 00:19:42.176 "auth": { 00:19:42.176 "state": "completed", 00:19:42.176 "digest": "sha512", 00:19:42.176 "dhgroup": "ffdhe4096" 00:19:42.176 } 00:19:42.176 } 00:19:42.176 ]' 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.176 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.472 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
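(The same ffdhe4096/key2 pairing is then exercised through the kernel initiator; condensed, this is the nvme-cli step the trace just logged, with the host UUID and the DHHC-1 secrets copied verbatim from the lines above. The disconnect message confirms one controller was torn down before the host entry is removed again.)
# kernel initiator connect using the key2 secret pair printed in this trace
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0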
00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.042 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.302 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.561 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.561 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.821 { 00:19:43.821 "cntlid": 127, 00:19:43.821 "qid": 0, 00:19:43.821 "state": "enabled", 00:19:43.821 "thread": "nvmf_tgt_poll_group_000", 00:19:43.821 "listen_address": { 00:19:43.821 "trtype": "TCP", 00:19:43.821 "adrfam": "IPv4", 00:19:43.821 "traddr": "10.0.0.2", 00:19:43.821 "trsvcid": "4420" 00:19:43.821 }, 00:19:43.821 "peer_address": { 00:19:43.821 "trtype": "TCP", 00:19:43.821 "adrfam": "IPv4", 00:19:43.821 "traddr": "10.0.0.1", 00:19:43.821 "trsvcid": "46412" 00:19:43.821 }, 00:19:43.821 "auth": { 00:19:43.821 "state": "completed", 00:19:43.821 "digest": "sha512", 00:19:43.821 "dhgroup": "ffdhe4096" 00:19:43.821 } 00:19:43.821 } 00:19:43.821 ]' 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.821 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.081 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.651 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.912 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.172 00:19:45.172 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.172 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.172 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.432 { 00:19:45.432 "cntlid": 129, 00:19:45.432 "qid": 0, 00:19:45.432 "state": "enabled", 00:19:45.432 "thread": "nvmf_tgt_poll_group_000", 00:19:45.432 "listen_address": { 00:19:45.432 "trtype": "TCP", 00:19:45.432 "adrfam": "IPv4", 00:19:45.432 "traddr": "10.0.0.2", 00:19:45.432 "trsvcid": "4420" 00:19:45.432 }, 00:19:45.432 "peer_address": { 00:19:45.432 "trtype": "TCP", 00:19:45.432 "adrfam": "IPv4", 00:19:45.432 "traddr": "10.0.0.1", 00:19:45.432 "trsvcid": "46444" 00:19:45.432 }, 00:19:45.432 "auth": { 00:19:45.432 "state": "completed", 00:19:45.432 "digest": "sha512", 00:19:45.432 "dhgroup": "ffdhe6144" 00:19:45.432 } 00:19:45.432 } 00:19:45.432 ]' 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.432 00:30:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.432 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.433 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.433 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.433 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.433 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.693 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.634 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.634 00:31:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.634 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.894 00:19:46.894 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.894 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.894 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.156 { 00:19:47.156 "cntlid": 131, 00:19:47.156 "qid": 0, 00:19:47.156 "state": "enabled", 00:19:47.156 "thread": "nvmf_tgt_poll_group_000", 00:19:47.156 "listen_address": { 00:19:47.156 "trtype": "TCP", 00:19:47.156 "adrfam": "IPv4", 00:19:47.156 "traddr": "10.0.0.2", 00:19:47.156 "trsvcid": "4420" 00:19:47.156 }, 00:19:47.156 "peer_address": { 00:19:47.156 "trtype": "TCP", 00:19:47.156 "adrfam": "IPv4", 00:19:47.156 "traddr": "10.0.0.1", 00:19:47.156 "trsvcid": "46486" 00:19:47.156 }, 00:19:47.156 "auth": { 00:19:47.156 "state": "completed", 00:19:47.156 "digest": "sha512", 00:19:47.156 "dhgroup": "ffdhe6144" 00:19:47.156 } 00:19:47.156 } 00:19:47.156 ]' 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.156 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.417 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.357 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.617 00:19:48.617 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.617 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.617 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.877 { 00:19:48.877 "cntlid": 133, 00:19:48.877 "qid": 0, 00:19:48.877 "state": "enabled", 00:19:48.877 "thread": "nvmf_tgt_poll_group_000", 00:19:48.877 "listen_address": { 00:19:48.877 "trtype": "TCP", 00:19:48.877 "adrfam": "IPv4", 00:19:48.877 "traddr": "10.0.0.2", 00:19:48.877 "trsvcid": "4420" 00:19:48.877 }, 00:19:48.877 "peer_address": { 00:19:48.877 "trtype": "TCP", 00:19:48.877 "adrfam": "IPv4", 00:19:48.877 "traddr": "10.0.0.1", 00:19:48.877 "trsvcid": "46500" 00:19:48.877 }, 00:19:48.877 "auth": { 00:19:48.877 "state": "completed", 00:19:48.877 "digest": "sha512", 00:19:48.877 "dhgroup": "ffdhe6144" 00:19:48.877 } 00:19:48.877 } 00:19:48.877 ]' 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.877 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.137 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
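Each connect_authenticate round traced here has the same shape. A condensed sketch of one round, assuming hostrpc and rpc_cmd helpers equivalent to the ones in the trace (hostrpc drives the host-side bdev_nvme instance through /var/tmp/host.sock, rpc_cmd drives the nvmf target) and key/ckey names that were registered earlier in the test:

  # One authentication round: pin the host to a single digest/dhgroup,
  # allow the key pair on the target, attach, verify, detach.
  digest=sha512 dhgroup=ffdhe6144 keyid=2
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0

  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # ...check nvmf_subsystem_get_qpairs as in the sketch above...
  hostrpc bdev_nvme_detach_controller nvme0

The --dhchap-ctrlr-key arguments are what make the round bidirectional; rounds without a registered ckey pass only --dhchap-key.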
00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.078 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.338 00:19:50.338 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.338 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.338 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.598 { 00:19:50.598 "cntlid": 135, 00:19:50.598 "qid": 0, 00:19:50.598 "state": "enabled", 00:19:50.598 "thread": "nvmf_tgt_poll_group_000", 00:19:50.598 "listen_address": { 00:19:50.598 "trtype": "TCP", 00:19:50.598 "adrfam": "IPv4", 00:19:50.598 "traddr": "10.0.0.2", 00:19:50.598 "trsvcid": 
"4420" 00:19:50.598 }, 00:19:50.598 "peer_address": { 00:19:50.598 "trtype": "TCP", 00:19:50.598 "adrfam": "IPv4", 00:19:50.598 "traddr": "10.0.0.1", 00:19:50.598 "trsvcid": "59098" 00:19:50.598 }, 00:19:50.598 "auth": { 00:19:50.598 "state": "completed", 00:19:50.598 "digest": "sha512", 00:19:50.598 "dhgroup": "ffdhe6144" 00:19:50.598 } 00:19:50.598 } 00:19:50.598 ]' 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.598 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.858 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.798 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.368 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.368 { 00:19:52.368 "cntlid": 137, 00:19:52.368 "qid": 0, 00:19:52.368 "state": "enabled", 00:19:52.368 "thread": "nvmf_tgt_poll_group_000", 00:19:52.368 "listen_address": { 00:19:52.368 "trtype": "TCP", 00:19:52.368 "adrfam": "IPv4", 00:19:52.368 "traddr": "10.0.0.2", 00:19:52.368 "trsvcid": "4420" 00:19:52.368 }, 00:19:52.368 "peer_address": { 00:19:52.368 "trtype": "TCP", 00:19:52.368 "adrfam": "IPv4", 00:19:52.368 "traddr": "10.0.0.1", 00:19:52.368 "trsvcid": "59122" 00:19:52.368 }, 00:19:52.368 "auth": { 00:19:52.368 "state": "completed", 00:19:52.368 "digest": "sha512", 00:19:52.368 "dhgroup": "ffdhe8192" 00:19:52.368 } 00:19:52.368 } 00:19:52.368 ]' 00:19:52.368 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.628 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.888 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.459 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.720 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.289 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.289 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.289 { 00:19:54.289 "cntlid": 139, 00:19:54.289 "qid": 0, 00:19:54.289 "state": "enabled", 00:19:54.289 "thread": "nvmf_tgt_poll_group_000", 00:19:54.289 "listen_address": { 00:19:54.289 "trtype": "TCP", 00:19:54.289 "adrfam": "IPv4", 00:19:54.289 "traddr": "10.0.0.2", 00:19:54.289 "trsvcid": "4420" 00:19:54.289 }, 00:19:54.289 "peer_address": { 00:19:54.289 "trtype": "TCP", 00:19:54.289 "adrfam": "IPv4", 00:19:54.289 "traddr": "10.0.0.1", 00:19:54.289 "trsvcid": "59148" 00:19:54.289 }, 00:19:54.289 "auth": { 00:19:54.289 "state": "completed", 00:19:54.289 "digest": "sha512", 00:19:54.289 "dhgroup": "ffdhe8192" 00:19:54.290 } 00:19:54.290 } 00:19:54.290 ]' 00:19:54.290 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.290 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.290 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.550 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.550 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.550 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.550 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.550 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.550 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTViNzYyYzZjODlkNDY0ZDlhN2U5ODhhYzFmNDY1ZDGHpHO7: --dhchap-ctrl-secret DHHC-1:02:MTkyNjg1YzFlNjc5ODczM2I2ZWEyNzAyMWMxNTY4MDRlMWVkOTBlNTQ2NzlkMmYwkLErHQ==: 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
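The kernel-initiator leg of each round is plain nvme-cli with the DH-HMAC-CHAP secrets passed on the command line, followed by a disconnect and a target-side nvmf_subsystem_remove_host before the next round. A minimal sketch with the secrets reduced to placeholder variables (in the trace they are the DHHC-1:xx:...: strings generated for the test keys):

  # Connect with the kernel NVMe/TCP initiator using DH-HMAC-CHAP, then tear down.
  hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  subnqn=nqn.2024-03.io.spdk:cnode0
  host_secret=$DHCHAP_HOST_SECRET   # placeholder: DHHC-1 host key string
  ctrl_secret=$DHCHAP_CTRL_SECRET   # placeholder: DHHC-1 controller key (bidirectional rounds only)

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"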
00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.491 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.491 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.061 00:19:56.061 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.061 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.061 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
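The outer structure visible at target/auth.sh@92 through @94 is a pair of nested loops over the configured dhgroups and key indices, and the controller key is only attached when a ckey exists for that index (no ckey3 is registered, which is why the key3 rounds above authenticate the host only). A sketch of that driver, assuming the keys/ckeys arrays and connect_authenticate helper that the trace references, with the dhgroup list taken from the combined configuration printed later in the trace:

  # Drive one digest across every dhgroup and key index.
  digest=sha512
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done

  # Inside connect_authenticate (cf. target/auth.sh@37) the controller key is optional:
  #   ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})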
00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.321 { 00:19:56.321 "cntlid": 141, 00:19:56.321 "qid": 0, 00:19:56.321 "state": "enabled", 00:19:56.321 "thread": "nvmf_tgt_poll_group_000", 00:19:56.321 "listen_address": { 00:19:56.321 "trtype": "TCP", 00:19:56.321 "adrfam": "IPv4", 00:19:56.321 "traddr": "10.0.0.2", 00:19:56.321 "trsvcid": "4420" 00:19:56.321 }, 00:19:56.321 "peer_address": { 00:19:56.321 "trtype": "TCP", 00:19:56.321 "adrfam": "IPv4", 00:19:56.321 "traddr": "10.0.0.1", 00:19:56.321 "trsvcid": "59162" 00:19:56.321 }, 00:19:56.321 "auth": { 00:19:56.321 "state": "completed", 00:19:56.321 "digest": "sha512", 00:19:56.321 "dhgroup": "ffdhe8192" 00:19:56.321 } 00:19:56.321 } 00:19:56.321 ]' 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.321 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YWRjZDk0M2MyYjExM2ViOTY3NjBkMTdjYjY4MDkzZGMzYTgwNGVhNTI3NDIyOGJlJ3qj4Q==: --dhchap-ctrl-secret DHHC-1:01:YjE3YjllMzFiMzliMzU0MWZkMmRmOTYwNmY3ZTFjYTSvUPjA: 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.520 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.089 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.089 { 00:19:58.089 "cntlid": 143, 00:19:58.089 "qid": 0, 00:19:58.089 "state": "enabled", 00:19:58.089 "thread": "nvmf_tgt_poll_group_000", 00:19:58.089 "listen_address": { 00:19:58.089 "trtype": "TCP", 00:19:58.089 "adrfam": "IPv4", 00:19:58.089 "traddr": "10.0.0.2", 00:19:58.089 "trsvcid": "4420" 00:19:58.089 }, 00:19:58.089 "peer_address": { 00:19:58.089 "trtype": "TCP", 00:19:58.089 "adrfam": "IPv4", 00:19:58.089 "traddr": "10.0.0.1", 00:19:58.089 "trsvcid": "59200" 00:19:58.089 }, 00:19:58.089 "auth": { 00:19:58.089 "state": "completed", 00:19:58.089 "digest": "sha512", 00:19:58.089 "dhgroup": "ffdhe8192" 00:19:58.089 } 00:19:58.089 } 00:19:58.089 ]' 00:19:58.089 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.349 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.608 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.177 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.178 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.438 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.010 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.010 { 00:20:00.010 "cntlid": 145, 00:20:00.010 "qid": 0, 00:20:00.010 "state": "enabled", 00:20:00.010 "thread": "nvmf_tgt_poll_group_000", 00:20:00.010 "listen_address": { 00:20:00.010 "trtype": "TCP", 00:20:00.010 "adrfam": "IPv4", 00:20:00.010 "traddr": "10.0.0.2", 00:20:00.010 "trsvcid": "4420" 00:20:00.010 }, 00:20:00.010 "peer_address": { 00:20:00.010 "trtype": "TCP", 00:20:00.010 "adrfam": "IPv4", 00:20:00.010 "traddr": "10.0.0.1", 00:20:00.010 "trsvcid": "59210" 00:20:00.010 }, 00:20:00.010 "auth": { 00:20:00.010 "state": "completed", 00:20:00.010 "digest": "sha512", 00:20:00.010 "dhgroup": "ffdhe8192" 00:20:00.010 } 00:20:00.010 } 00:20:00.010 ]' 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.010 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.270 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGFjMzI5YzU4YjFlMDQ3MWU0MWRhMTg0YTViYTg4Nzc5MWM3ZmUwNWNlNzFkM2Iy/Nxh7w==: --dhchap-ctrl-secret DHHC-1:03:ZWQ2MDgxY2QwZmE2OGIxOWE4ZjMyMWY5MzcxNWViMzg2NDJkNjcxYzM1ZTk3NGJiZjk3MWQ3MGFhNWIxZTg1Met+izI=: 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:01.214 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:01.474 request: 00:20:01.474 { 00:20:01.474 "name": "nvme0", 00:20:01.474 "trtype": "tcp", 00:20:01.474 "traddr": "10.0.0.2", 00:20:01.474 "adrfam": "ipv4", 00:20:01.474 "trsvcid": "4420", 00:20:01.474 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:01.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:01.474 "prchk_reftag": false, 00:20:01.474 "prchk_guard": false, 00:20:01.474 "hdgst": false, 00:20:01.474 "ddgst": false, 00:20:01.474 "dhchap_key": "key2", 00:20:01.474 "method": "bdev_nvme_attach_controller", 00:20:01.474 "req_id": 1 00:20:01.474 } 00:20:01.474 Got JSON-RPC error response 00:20:01.474 response: 00:20:01.474 { 00:20:01.474 "code": -5, 00:20:01.474 "message": "Input/output error" 00:20:01.474 } 00:20:01.474 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:01.475 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.046 request: 00:20:02.046 { 00:20:02.046 "name": "nvme0", 00:20:02.046 "trtype": "tcp", 00:20:02.046 "traddr": "10.0.0.2", 00:20:02.046 "adrfam": "ipv4", 00:20:02.046 "trsvcid": "4420", 00:20:02.046 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:02.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.046 "prchk_reftag": false, 00:20:02.046 "prchk_guard": false, 00:20:02.046 "hdgst": false, 00:20:02.046 "ddgst": false, 00:20:02.046 "dhchap_key": "key1", 00:20:02.046 "dhchap_ctrlr_key": "ckey2", 00:20:02.046 "method": "bdev_nvme_attach_controller", 00:20:02.046 "req_id": 1 00:20:02.046 } 00:20:02.046 Got JSON-RPC error response 00:20:02.046 response: 00:20:02.046 { 00:20:02.046 "code": -5, 00:20:02.046 "message": "Input/output error" 00:20:02.046 } 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.046 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.618 request: 00:20:02.618 { 00:20:02.618 "name": "nvme0", 00:20:02.618 "trtype": "tcp", 00:20:02.618 "traddr": "10.0.0.2", 00:20:02.618 "adrfam": "ipv4", 00:20:02.618 "trsvcid": "4420", 00:20:02.618 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:02.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.618 "prchk_reftag": false, 00:20:02.618 "prchk_guard": false, 00:20:02.618 "hdgst": false, 00:20:02.618 "ddgst": false, 00:20:02.618 "dhchap_key": "key1", 00:20:02.618 "dhchap_ctrlr_key": "ckey1", 00:20:02.618 "method": "bdev_nvme_attach_controller", 00:20:02.618 "req_id": 1 00:20:02.618 } 00:20:02.618 Got JSON-RPC error response 00:20:02.618 response: 00:20:02.618 { 00:20:02.618 "code": -5, 00:20:02.618 "message": "Input/output error" 00:20:02.618 } 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1086033 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1086033 ']' 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1086033 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086033 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086033' 00:20:02.618 killing process with pid 1086033 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1086033 00:20:02.618 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1086033 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1111652 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1111652 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1111652 ']' 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 00:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:03.451 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.451 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:03.451 00:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.451 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.451 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1111652 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1111652 ']' 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
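At this point target/auth.sh has killed the first nvmf_tgt instance and is restarting it inside the cvl_0_0_ns_spdk network namespace with DH-HMAC-CHAP debug logging enabled, as captured at nvmf/common.sh@480 above. A rough sketch of that restart, using only the flags visible in the trace (paths shortened to the SPDK checkout root; the follow-up framework_start_init RPC is an assumption implied by --wait-for-rpc and is not itself shown in this excerpt):

    # start the target paused, with nvmf_auth debug logging, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # --wait-for-rpc holds subsystem initialization until an explicit RPC kicks it off
    ./scripts/rpc.py framework_start_init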
00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.730 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.026 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.287 00:20:04.287 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.287 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.287 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.547 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.547 { 00:20:04.547 
"cntlid": 1, 00:20:04.547 "qid": 0, 00:20:04.547 "state": "enabled", 00:20:04.547 "thread": "nvmf_tgt_poll_group_000", 00:20:04.547 "listen_address": { 00:20:04.547 "trtype": "TCP", 00:20:04.547 "adrfam": "IPv4", 00:20:04.547 "traddr": "10.0.0.2", 00:20:04.547 "trsvcid": "4420" 00:20:04.547 }, 00:20:04.547 "peer_address": { 00:20:04.548 "trtype": "TCP", 00:20:04.548 "adrfam": "IPv4", 00:20:04.548 "traddr": "10.0.0.1", 00:20:04.548 "trsvcid": "50476" 00:20:04.548 }, 00:20:04.548 "auth": { 00:20:04.548 "state": "completed", 00:20:04.548 "digest": "sha512", 00:20:04.548 "dhgroup": "ffdhe8192" 00:20:04.548 } 00:20:04.548 } 00:20:04.548 ]' 00:20:04.548 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.548 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.548 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.548 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.548 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.808 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.808 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.808 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.808 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:ODM4MDFjNzQyZmFmMDdkYWJmYjRmNzViMTk4Njk5YzBlMGQ4NWY3MWJkZjE5MDc4NjA3NmUwNTMyNDgyMTA4NTCkPWw=: 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.750 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.010 request: 00:20:06.010 { 00:20:06.010 "name": "nvme0", 00:20:06.010 "trtype": "tcp", 00:20:06.010 "traddr": "10.0.0.2", 00:20:06.010 "adrfam": "ipv4", 00:20:06.010 "trsvcid": "4420", 00:20:06.010 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:06.010 "prchk_reftag": false, 00:20:06.010 "prchk_guard": false, 00:20:06.010 "hdgst": false, 00:20:06.010 "ddgst": false, 00:20:06.010 "dhchap_key": "key3", 00:20:06.010 "method": "bdev_nvme_attach_controller", 00:20:06.010 "req_id": 1 00:20:06.010 } 00:20:06.010 Got JSON-RPC error response 00:20:06.010 response: 00:20:06.010 { 00:20:06.010 "code": -5, 00:20:06.010 "message": "Input/output error" 00:20:06.010 } 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:06.010 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.011 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.272 request: 00:20:06.272 { 00:20:06.272 "name": "nvme0", 00:20:06.272 "trtype": "tcp", 00:20:06.272 "traddr": "10.0.0.2", 00:20:06.272 "adrfam": "ipv4", 00:20:06.272 "trsvcid": "4420", 00:20:06.272 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:06.272 "prchk_reftag": false, 00:20:06.272 "prchk_guard": false, 00:20:06.272 "hdgst": false, 00:20:06.272 "ddgst": false, 00:20:06.272 "dhchap_key": "key3", 00:20:06.272 "method": "bdev_nvme_attach_controller", 00:20:06.272 "req_id": 1 00:20:06.272 } 00:20:06.272 Got JSON-RPC error response 00:20:06.272 response: 00:20:06.272 { 00:20:06.272 "code": -5, 00:20:06.272 "message": "Input/output error" 00:20:06.272 } 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:06.272 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:06.273 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:06.273 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:06.273 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.533 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:06.533 request: 00:20:06.533 { 00:20:06.533 "name": "nvme0", 00:20:06.533 "trtype": "tcp", 00:20:06.533 "traddr": "10.0.0.2", 00:20:06.533 "adrfam": "ipv4", 00:20:06.533 "trsvcid": "4420", 00:20:06.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:06.533 "prchk_reftag": false, 00:20:06.533 "prchk_guard": false, 00:20:06.533 "hdgst": false, 00:20:06.533 "ddgst": false, 00:20:06.533 
"dhchap_key": "key0", 00:20:06.533 "dhchap_ctrlr_key": "key1", 00:20:06.533 "method": "bdev_nvme_attach_controller", 00:20:06.533 "req_id": 1 00:20:06.533 } 00:20:06.533 Got JSON-RPC error response 00:20:06.533 response: 00:20:06.533 { 00:20:06.533 "code": -5, 00:20:06.533 "message": "Input/output error" 00:20:06.533 } 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.533 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:06.793 00:20:06.793 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:06.793 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:06.793 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.054 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.054 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.054 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1086064 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1086064 ']' 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1086064 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.055 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086064 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086064' 00:20:07.316 killing process with pid 1086064 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1086064 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1086064 
00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.316 rmmod nvme_tcp 00:20:07.316 rmmod nvme_fabrics 00:20:07.316 rmmod nvme_keyring 00:20:07.316 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1111652 ']' 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1111652 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1111652 ']' 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1111652 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111652 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:07.576 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111652' 00:20:07.576 killing process with pid 1111652 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1111652 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1111652 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.576 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.120 00:31:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.120 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.k9k /tmp/spdk.key-sha256.vnc /tmp/spdk.key-sha384.w1k /tmp/spdk.key-sha512.W11 /tmp/spdk.key-sha512.UCU /tmp/spdk.key-sha384.DsZ /tmp/spdk.key-sha256.Gck '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:10.120 00:20:10.120 real 2m19.270s 00:20:10.120 user 5m8.768s 00:20:10.120 sys 0m19.729s 00:20:10.120 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.120 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.120 ************************************ 00:20:10.120 END TEST nvmf_auth_target 00:20:10.120 ************************************ 00:20:10.120 00:31:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.120 00:31:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:10.120 00:31:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.120 00:31:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:10.120 00:31:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.120 00:31:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.120 ************************************ 00:20:10.120 START TEST nvmf_bdevio_no_huge 00:20:10.120 ************************************ 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.120 * Looking for test storage... 00:20:10.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
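The nvmf_auth_target run ends here after roughly 2m19s of wall-clock time, and nvmf_bdevio_no_huge begins by sourcing test/nvmf/common.sh, which derives the host identity from nvme-cli. A minimal sketch of that setup (the exact parameter expansion common.sh uses for NVME_HOSTID is not visible in the trace, so the second line is an assumption that merely reproduces the value logged above):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the uuid portion, 00539ede-7deb-ec11-9bc7-a4bf01928396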
00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.120 00:31:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.120 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.121 00:31:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:18.259 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:18.259 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.259 
00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:18.259 Found net devices under 0000:31:00.0: cvl_0_0 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:18.259 Found net devices under 0000:31:00.1: cvl_0_1 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.259 00:31:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:20:18.259 00:20:18.259 --- 10.0.0.2 ping statistics --- 00:20:18.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.259 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:20:18.259 00:20:18.259 --- 10.0.0.1 ping statistics --- 00:20:18.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.259 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.259 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1117380 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
1117380 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1117380 ']' 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.260 00:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.260 [2024-07-16 00:31:31.511863] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:18.260 [2024-07-16 00:31:31.511929] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:18.260 [2024-07-16 00:31:31.614619] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.260 [2024-07-16 00:31:31.721917] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.260 [2024-07-16 00:31:31.721971] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.260 [2024-07-16 00:31:31.721980] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.260 [2024-07-16 00:31:31.721987] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.260 [2024-07-16 00:31:31.721993] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
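The nvmf_tcp_init and nvmfappstart steps traced above boil down to a short iproute2/iptables sequence plus a backgrounded nvmf_tgt started inside the namespace. A minimal, hedged reproduction follows: it substitutes a veth pair (hypothetical names ini0/tgt0) for the physical cvl_0_1/cvl_0_0 e810 ports, keeps the IPs and namespace name from the log, and must be run as root from an SPDK build tree; the polling loop is only a rough stand-in for waitforlisten.

set -e
ip netns add cvl_0_0_ns_spdk                      # namespace that owns the target-side port
ip link add ini0 type veth peer name tgt0         # stand-in for cvl_0_1 (initiator) / cvl_0_0 (target)
ip link set tgt0 netns cvl_0_0_ns_spdk            # move the target end into the namespace
ip addr add 10.0.0.1/24 dev ini0                  # initiator IP, as in the log
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev tgt0   # target IP
ip link set ini0 up
ip netns exec cvl_0_0_ns_spdk ip link set tgt0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # reachability check, initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back

# Start the target inside the namespace without hugepages, then wait for its RPC
# socket (a unix socket, so it is reachable from the root namespace).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"                            # abort if the target died early
    sleep 0.5
done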
00:20:18.260 [2024-07-16 00:31:31.722656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:18.260 [2024-07-16 00:31:31.722872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:18.260 [2024-07-16 00:31:31.723247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.260 [2024-07-16 00:31:31.723257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 [2024-07-16 00:31:32.367273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 Malloc0 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.832 [2024-07-16 00:31:32.420939] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.832 { 00:20:18.832 "params": { 00:20:18.832 "name": "Nvme$subsystem", 00:20:18.832 "trtype": "$TEST_TRANSPORT", 00:20:18.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.832 "adrfam": "ipv4", 00:20:18.832 "trsvcid": "$NVMF_PORT", 00:20:18.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.832 "hdgst": ${hdgst:-false}, 00:20:18.832 "ddgst": ${ddgst:-false} 00:20:18.832 }, 00:20:18.832 "method": "bdev_nvme_attach_controller" 00:20:18.832 } 00:20:18.832 EOF 00:20:18.832 )") 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:18.832 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:18.833 00:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.833 "params": { 00:20:18.833 "name": "Nvme1", 00:20:18.833 "trtype": "tcp", 00:20:18.833 "traddr": "10.0.0.2", 00:20:18.833 "adrfam": "ipv4", 00:20:18.833 "trsvcid": "4420", 00:20:18.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.833 "hdgst": false, 00:20:18.833 "ddgst": false 00:20:18.833 }, 00:20:18.833 "method": "bdev_nvme_attach_controller" 00:20:18.833 }' 00:20:19.093 [2024-07-16 00:31:32.488227] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
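The bdevio run above receives its NVMe bdev definition from gen_nvmf_target_json through /dev/fd/62. An equivalent standalone invocation writes the same JSON to a file first; note that the outer "subsystems"/"bdev" wrapper below is an assumption based on SPDK's usual JSON config layout, since the log only prints the inner method/params object.

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false } } ] } ]
}
EOF
# Same flags as the traced run, invoked from the SPDK tree.
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024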
00:20:19.093 [2024-07-16 00:31:32.488308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1117421 ] 00:20:19.093 [2024-07-16 00:31:32.567397] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.093 [2024-07-16 00:31:32.663980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.093 [2024-07-16 00:31:32.664096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.093 [2024-07-16 00:31:32.664099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.663 I/O targets: 00:20:19.663 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:19.663 00:20:19.663 00:20:19.663 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.663 http://cunit.sourceforge.net/ 00:20:19.663 00:20:19.663 00:20:19.663 Suite: bdevio tests on: Nvme1n1 00:20:19.663 Test: blockdev write read block ...passed 00:20:19.663 Test: blockdev write zeroes read block ...passed 00:20:19.663 Test: blockdev write zeroes read no split ...passed 00:20:19.663 Test: blockdev write zeroes read split ...passed 00:20:19.663 Test: blockdev write zeroes read split partial ...passed 00:20:19.663 Test: blockdev reset ...[2024-07-16 00:31:33.236524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.663 [2024-07-16 00:31:33.236589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160fb10 (9): Bad file descriptor 00:20:19.663 [2024-07-16 00:31:33.250590] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:19.663 passed 00:20:19.663 Test: blockdev write read 8 blocks ...passed 00:20:19.663 Test: blockdev write read size > 128k ...passed 00:20:19.663 Test: blockdev write read invalid size ...passed 00:20:19.922 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.922 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.922 Test: blockdev write read max offset ...passed 00:20:19.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.922 Test: blockdev writev readv 8 blocks ...passed 00:20:19.922 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.922 Test: blockdev writev readv block ...passed 00:20:19.922 Test: blockdev writev readv size > 128k ...passed 00:20:19.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.922 Test: blockdev comparev and writev ...[2024-07-16 00:31:33.476087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.476111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.476121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.476127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.476676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.476685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.476694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.476699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.477240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.477248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.477257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.477262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.477734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.477741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.923 [2024-07-16 00:31:33.477750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.923 [2024-07-16 00:31:33.477756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.923 passed 00:20:20.182 Test: blockdev nvme passthru rw ...passed 00:20:20.182 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:31:33.564268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.182 [2024-07-16 00:31:33.564278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:20.182 [2024-07-16 00:31:33.564561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.182 [2024-07-16 00:31:33.564568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:20.183 [2024-07-16 00:31:33.564894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.183 [2024-07-16 00:31:33.564900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:20.183 [2024-07-16 00:31:33.565208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.183 [2024-07-16 00:31:33.565218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:20.183 passed 00:20:20.183 Test: blockdev nvme admin passthru ...passed 00:20:20.183 Test: blockdev copy ...passed 00:20:20.183 00:20:20.183 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.183 suites 1 1 n/a 0 0 00:20:20.183 tests 23 23 23 0 0 00:20:20.183 asserts 152 152 152 0 n/a 00:20:20.183 00:20:20.183 Elapsed time = 1.223 seconds 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.443 rmmod nvme_tcp 00:20:20.443 rmmod nvme_fabrics 00:20:20.443 rmmod nvme_keyring 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1117380 ']' 00:20:20.443 00:31:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1117380 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1117380 ']' 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1117380 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.443 00:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117380 00:20:20.443 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:20.443 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:20.443 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117380' 00:20:20.443 killing process with pid 1117380 00:20:20.443 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1117380 00:20:20.443 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1117380 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.014 00:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.926 00:31:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.926 00:20:22.926 real 0m13.172s 00:20:22.926 user 0m15.174s 00:20:22.926 sys 0m6.943s 00:20:22.926 00:31:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.926 00:31:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.926 ************************************ 00:20:22.926 END TEST nvmf_bdevio_no_huge 00:20:22.926 ************************************ 00:20:22.926 00:31:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:22.926 00:31:36 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:22.926 00:31:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:22.926 00:31:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.926 00:31:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.926 ************************************ 00:20:22.926 START TEST nvmf_tls 00:20:22.926 ************************************ 00:20:22.926 00:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.186 * Looking for test storage... 
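The nvmftestfini/nvmfcleanup teardown traced just above kills the target, unloads the kernel initiator modules and flushes the interfaces. A matching cleanup for the earlier veth-based sketch (same hypothetical ini0/tgt0 names and $nvmfpid variable) could look like this:

kill -9 "$nvmfpid" 2>/dev/null || true             # stop the nvmf_tgt started earlier
modprobe -v -r nvme-tcp                            # as nvmfcleanup does above
modprobe -v -r nvme-fabrics
ip netns exec cvl_0_0_ns_spdk ip -4 addr flush tgt0 || true
ip -4 addr flush ini0                              # mirrors 'ip -4 addr flush cvl_0_1'
ip netns delete cvl_0_0_ns_spdk                    # removes tgt0 and with it the veth pair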
00:20:23.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:23.186 00:31:36 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.187 00:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.328 
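The NVME_HOSTNQN/NVME_HOSTID pair sourced from nvmf/common.sh above comes from nvme-cli; a minimal reproduction is shown below. The UUID will of course differ from the log's, and the parameter-expansion used to derive the host ID is an assumption about how common.sh extracts it.

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid part for --hostid
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"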
00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:31.328 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:31.328 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:31.328 Found net devices under 0000:31:00.0: cvl_0_0 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:31.328 Found net devices under 0000:31:00.1: cvl_0_1 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:20:31.328 00:20:31.328 --- 10.0.0.2 ping statistics --- 00:20:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.328 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:20:31.328 00:20:31.328 --- 10.0.0.1 ping statistics --- 00:20:31.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.328 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1122430 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1122430 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1122430 ']' 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.328 00:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.328 [2024-07-16 00:31:44.839830] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:31.328 [2024-07-16 00:31:44.839894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.329 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.329 [2024-07-16 00:31:44.907959] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.590 [2024-07-16 00:31:44.978376] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.590 [2024-07-16 00:31:44.978423] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:31.590 [2024-07-16 00:31:44.978431] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.590 [2024-07-16 00:31:44.978437] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.590 [2024-07-16 00:31:44.978442] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.590 [2024-07-16 00:31:44.978472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:31.590 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:31.851 true 00:20:31.851 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.851 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:31.851 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:31.851 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:31.851 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:32.112 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.112 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:32.373 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:32.373 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:32.373 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:32.373 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.373 00:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:32.634 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:32.895 00:31:46 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.895 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:33.156 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:33.156 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:33.156 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:33.156 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:33.156 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:33.417 00:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.f05TXlepel 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3ffqrNdOsH 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.f05TXlepel 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3ffqrNdOsH 00:20:33.417 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:33.678 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:33.939 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.f05TXlepel 00:20:33.939 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.f05TXlepel 00:20:33.939 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.939 [2024-07-16 00:31:47.569378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.199 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.199 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.459 [2024-07-16 00:31:47.886120] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.459 [2024-07-16 00:31:47.886298] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.459 00:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.459 malloc0 00:20:34.459 00:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.719 00:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f05TXlepel 00:20:34.719 [2024-07-16 00:31:48.337171] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.980 00:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.f05TXlepel 00:20:34.980 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.982 Initializing NVMe Controllers 00:20:44.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:44.982 Initialization complete. Launching workers. 
00:20:44.982 ======================================================== 00:20:44.982 Latency(us) 00:20:44.982 Device Information : IOPS MiB/s Average min max 00:20:44.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18957.58 74.05 3376.00 1124.82 5242.77 00:20:44.982 ======================================================== 00:20:44.982 Total : 18957.58 74.05 3376.00 1124.82 5242.77 00:20:44.982 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f05TXlepel 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.f05TXlepel' 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1125152 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1125152 /var/tmp/bdevperf.sock 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125152 ']' 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.982 00:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.982 [2024-07-16 00:31:58.512489] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
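The TLS-side target setup traced above (sock options applied while the target is held by --wait-for-rpc, then setup_nvmf_tgt adding a TLS listener and a PSK-bound host) condenses to the RPCs below. The PSK file reuses the interchange-format string the log generated; rpc.py talks to the default /var/tmp/spdk.sock and is run from the SPDK tree.

RPC=./scripts/rpc.py
KEY=$(mktemp)                                       # PSK in NVMe TLS interchange format,
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"                                   # key file must not be world-readable
$RPC sock_set_default_impl -i ssl                   # ssl sock implementation...
$RPC sock_impl_set_options -i ssl --tls-version 13  # ...pinned to TLS 1.3
$RPC framework_start_init                           # release the --wait-for-rpc hold
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"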
00:20:44.982 [2024-07-16 00:31:58.512547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125152 ] 00:20:44.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.982 [2024-07-16 00:31:58.568709] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.246 [2024-07-16 00:31:58.621247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.818 00:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.818 00:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:45.818 00:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f05TXlepel 00:20:45.818 [2024-07-16 00:31:59.394102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.818 [2024-07-16 00:31:59.394157] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.079 TLSTESTn1 00:20:46.079 00:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:46.079 Running I/O for 10 seconds... 00:20:56.141 00:20:56.141 Latency(us) 00:20:56.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.141 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:56.141 Verification LBA range: start 0x0 length 0x2000 00:20:56.141 TLSTESTn1 : 10.07 3512.30 13.72 0.00 0.00 36333.27 5543.25 94808.75 00:20:56.141 =================================================================================================================== 00:20:56.141 Total : 3512.30 13.72 0.00 0.00 36333.27 5543.25 94808.75 00:20:56.141 0 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1125152 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125152 ']' 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125152 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125152 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125152' 00:20:56.141 killing process with pid 1125152 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125152 00:20:56.141 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.141 00:20:56.141 Latency(us) 00:20:56.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:56.141 =================================================================================================================== 00:20:56.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.141 [2024-07-16 00:32:09.746144] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:56.141 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125152 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ffqrNdOsH 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ffqrNdOsH 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ffqrNdOsH 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3ffqrNdOsH' 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1127388 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1127388 /var/tmp/bdevperf.sock 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127388 ']' 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.408 00:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.408 [2024-07-16 00:32:09.912870] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:20:56.408 [2024-07-16 00:32:09.912929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127388 ] 00:20:56.408 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.408 [2024-07-16 00:32:09.967664] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.408 [2024-07-16 00:32:10.021786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3ffqrNdOsH 00:20:57.350 [2024-07-16 00:32:10.832778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.350 [2024-07-16 00:32:10.832835] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.350 [2024-07-16 00:32:10.837481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:57.350 [2024-07-16 00:32:10.838080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de6eb0 (107): Transport endpoint is not connected 00:20:57.350 [2024-07-16 00:32:10.839075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de6eb0 (9): Bad file descriptor 00:20:57.350 [2024-07-16 00:32:10.840077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.350 [2024-07-16 00:32:10.840084] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:57.350 [2024-07-16 00:32:10.840089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
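Note: the connection errors above are the expected outcome of the @146 case, which hands the initiator /tmp/tmp.3ffqrNdOsH, a PSK the target was never configured with, so the TLS handshake cannot complete and bdev_nvme_attach_controller ends in an I/O error. The xtrace asserts this with the NOT wrapper and "return 1"; a minimal, hypothetical sketch of that expected-failure pattern (the real helper in autotest_common.sh is more general than this):

    # expect_failure is a stand-in for the NOT helper traced above:
    # it succeeds only when the wrapped command fails.
    expect_failure() {
        if "$@"; then
            echo "ERROR: command unexpectedly succeeded: $*" >&2
            return 1
        fi
        return 0
    }
    # assumes run_bdevperf from target/tls.sh is sourced into the current shell
    expect_failure run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3ffqrNdOsH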
00:20:57.350 request: 00:20:57.350 { 00:20:57.350 "name": "TLSTEST", 00:20:57.350 "trtype": "tcp", 00:20:57.350 "traddr": "10.0.0.2", 00:20:57.350 "adrfam": "ipv4", 00:20:57.350 "trsvcid": "4420", 00:20:57.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.350 "prchk_reftag": false, 00:20:57.350 "prchk_guard": false, 00:20:57.350 "hdgst": false, 00:20:57.350 "ddgst": false, 00:20:57.350 "psk": "/tmp/tmp.3ffqrNdOsH", 00:20:57.350 "method": "bdev_nvme_attach_controller", 00:20:57.350 "req_id": 1 00:20:57.350 } 00:20:57.350 Got JSON-RPC error response 00:20:57.350 response: 00:20:57.350 { 00:20:57.350 "code": -5, 00:20:57.350 "message": "Input/output error" 00:20:57.350 } 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1127388 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127388 ']' 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127388 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127388 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127388' 00:20:57.350 killing process with pid 1127388 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127388 00:20:57.350 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.350 00:20:57.350 Latency(us) 00:20:57.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.350 =================================================================================================================== 00:20:57.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.350 [2024-07-16 00:32:10.906324] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.350 00:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127388 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f05TXlepel 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f05TXlepel 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.f05TXlepel 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.f05TXlepel' 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1127513 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1127513 /var/tmp/bdevperf.sock 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127513 ']' 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.612 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.612 [2024-07-16 00:32:11.063763] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:20:57.612 [2024-07-16 00:32:11.063822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127513 ] 00:20:57.612 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.612 [2024-07-16 00:32:11.119505] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.612 [2024-07-16 00:32:11.172364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.f05TXlepel 00:20:58.554 [2024-07-16 00:32:11.965305] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.554 [2024-07-16 00:32:11.965358] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.554 [2024-07-16 00:32:11.970051] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:58.554 [2024-07-16 00:32:11.970076] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:58.554 [2024-07-16 00:32:11.970096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:58.554 [2024-07-16 00:32:11.970730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdeb0 (107): Transport endpoint is not connected 00:20:58.554 [2024-07-16 00:32:11.971725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdeb0 (9): Bad file descriptor 00:20:58.554 [2024-07-16 00:32:11.972730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.554 [2024-07-16 00:32:11.972737] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.554 [2024-07-16 00:32:11.972742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
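Note: in the @149 case the key file itself is valid, but the initiator connects as nqn.2016-06.io.spdk:host2, for which the target has no PSK registered. The tcp.c/posix.c errors above print the TLS PSK identity the target looks up; reconstructing it from the log (format taken directly from the "Could not find PSK for identity" message):

    # PSK identity string as it appears in the error above
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"    # no key is registered under this identity, so the handshake is rejected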
00:20:58.554 request: 00:20:58.554 { 00:20:58.554 "name": "TLSTEST", 00:20:58.554 "trtype": "tcp", 00:20:58.554 "traddr": "10.0.0.2", 00:20:58.554 "adrfam": "ipv4", 00:20:58.554 "trsvcid": "4420", 00:20:58.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.554 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.554 "prchk_reftag": false, 00:20:58.554 "prchk_guard": false, 00:20:58.554 "hdgst": false, 00:20:58.554 "ddgst": false, 00:20:58.554 "psk": "/tmp/tmp.f05TXlepel", 00:20:58.554 "method": "bdev_nvme_attach_controller", 00:20:58.554 "req_id": 1 00:20:58.554 } 00:20:58.554 Got JSON-RPC error response 00:20:58.554 response: 00:20:58.554 { 00:20:58.554 "code": -5, 00:20:58.554 "message": "Input/output error" 00:20:58.554 } 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1127513 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127513 ']' 00:20:58.554 00:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127513 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127513 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127513' 00:20:58.554 killing process with pid 1127513 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127513 00:20:58.554 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.554 00:20:58.554 Latency(us) 00:20:58.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.554 =================================================================================================================== 00:20:58.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.554 [2024-07-16 00:32:12.058235] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127513 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f05TXlepel 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f05TXlepel 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.f05TXlepel 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.f05TXlepel' 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1127859 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1127859 /var/tmp/bdevperf.sock 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127859 ']' 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.554 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.814 [2024-07-16 00:32:12.223475] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:20:58.814 [2024-07-16 00:32:12.223533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127859 ] 00:20:58.814 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.814 [2024-07-16 00:32:12.280720] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.814 [2024-07-16 00:32:12.332116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.385 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.385 00:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.385 00:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f05TXlepel 00:20:59.646 [2024-07-16 00:32:13.125165] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.646 [2024-07-16 00:32:13.125226] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:59.646 [2024-07-16 00:32:13.132475] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:59.646 [2024-07-16 00:32:13.132494] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:59.646 [2024-07-16 00:32:13.132513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.646 [2024-07-16 00:32:13.133510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47eb0 (107): Transport endpoint is not connected 00:20:59.646 [2024-07-16 00:32:13.134506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47eb0 (9): Bad file descriptor 00:20:59.646 [2024-07-16 00:32:13.135508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:59.646 [2024-07-16 00:32:13.135518] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.646 [2024-07-16 00:32:13.135523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
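Note: the @152 case mirrors the previous one from the other side: the host NQN (host1) is the one the target knows, but the requested subsystem nqn.2016-06.io.spdk:cnode2 was never created, so again no PSK can be found for the identity "NVMe0R01 ... host1 ... cnode2". For this attach to succeed, the target would need roughly the same setup that is traced later for cnode1 (hypothetical sketch reusing the rpc.py calls that appear further down in this log; the serial number is made up):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f05TXlepel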
00:20:59.646 request: 00:20:59.646 { 00:20:59.646 "name": "TLSTEST", 00:20:59.646 "trtype": "tcp", 00:20:59.646 "traddr": "10.0.0.2", 00:20:59.646 "adrfam": "ipv4", 00:20:59.646 "trsvcid": "4420", 00:20:59.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.646 "prchk_reftag": false, 00:20:59.646 "prchk_guard": false, 00:20:59.646 "hdgst": false, 00:20:59.646 "ddgst": false, 00:20:59.646 "psk": "/tmp/tmp.f05TXlepel", 00:20:59.646 "method": "bdev_nvme_attach_controller", 00:20:59.646 "req_id": 1 00:20:59.646 } 00:20:59.646 Got JSON-RPC error response 00:20:59.646 response: 00:20:59.646 { 00:20:59.646 "code": -5, 00:20:59.646 "message": "Input/output error" 00:20:59.646 } 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1127859 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127859 ']' 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127859 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127859 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127859' 00:20:59.646 killing process with pid 1127859 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127859 00:20:59.646 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.646 00:20:59.646 Latency(us) 00:20:59.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.646 =================================================================================================================== 00:20:59.646 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.646 [2024-07-16 00:32:13.222427] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:59.646 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127859 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1128103 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1128103 /var/tmp/bdevperf.sock 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128103 ']' 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.907 00:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.907 [2024-07-16 00:32:13.387171] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:20:59.907 [2024-07-16 00:32:13.387226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128103 ] 00:20:59.907 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.907 [2024-07-16 00:32:13.443686] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.907 [2024-07-16 00:32:13.495455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.848 [2024-07-16 00:32:14.295364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.848 [2024-07-16 00:32:14.297241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd375b0 (9): Bad file descriptor 00:21:00.848 [2024-07-16 00:32:14.298241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.848 [2024-07-16 00:32:14.298249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.848 [2024-07-16 00:32:14.298254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
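Note: @155 is the no-PSK control case: the same attach, but with an empty psk argument, against the TLS-enabled listener used throughout this test, so the connection is again expected to fail (the request dumped below has no "psk" member at all). The only difference from the working invocation earlier in the log is the missing --psk flag:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # fails against the TLS listener used in this test:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # succeeds (as in the first run above) when the matching key is supplied:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f05TXlepel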
00:21:00.848 request: 00:21:00.848 { 00:21:00.848 "name": "TLSTEST", 00:21:00.848 "trtype": "tcp", 00:21:00.848 "traddr": "10.0.0.2", 00:21:00.848 "adrfam": "ipv4", 00:21:00.848 "trsvcid": "4420", 00:21:00.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.848 "prchk_reftag": false, 00:21:00.848 "prchk_guard": false, 00:21:00.848 "hdgst": false, 00:21:00.848 "ddgst": false, 00:21:00.848 "method": "bdev_nvme_attach_controller", 00:21:00.848 "req_id": 1 00:21:00.848 } 00:21:00.848 Got JSON-RPC error response 00:21:00.848 response: 00:21:00.848 { 00:21:00.848 "code": -5, 00:21:00.848 "message": "Input/output error" 00:21:00.848 } 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1128103 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128103 ']' 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128103 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128103 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128103' 00:21:00.848 killing process with pid 1128103 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128103 00:21:00.848 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.848 00:21:00.848 Latency(us) 00:21:00.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.848 =================================================================================================================== 00:21:00.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.848 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128103 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1122430 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1122430 ']' 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1122430 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122430 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122430' 00:21:01.109 
killing process with pid 1122430 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1122430 00:21:01.109 [2024-07-16 00:32:14.540121] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1122430 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.kOCGRAvDPS 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.kOCGRAvDPS 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1128295 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1128295 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128295 ']' 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.109 00:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.370 [2024-07-16 00:32:14.771522] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
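Note: @159-@162 generate the long-form PSK used for the remaining cases: an NVMe TLS interchange key of the form NVMeTLSkey-1:02:<base64>:, written to /tmp/tmp.kOCGRAvDPS and restricted to mode 0600. The colon-separated fields can be pulled apart directly from the value printed above (small sketch; the "02" field corresponds to the "2" digest argument passed to format_interchange_psk, and the base64 payload carries the key material as given plus a short 4-byte integrity trailer):

    key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    IFS=':' read -r prefix hash b64 _ <<< "$key"
    echo "prefix=$prefix hash=$hash"
    printf '%s' "$b64" | base64 -d | xxd   # key material as passed in, followed by the 4-byte trailer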
00:21:01.370 [2024-07-16 00:32:14.771577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.370 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.370 [2024-07-16 00:32:14.861199] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.370 [2024-07-16 00:32:14.919644] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.370 [2024-07-16 00:32:14.919680] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.370 [2024-07-16 00:32:14.919685] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.370 [2024-07-16 00:32:14.919690] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.370 [2024-07-16 00:32:14.919694] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.370 [2024-07-16 00:32:14.919716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.940 00:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.940 00:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.940 00:32:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.940 00:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.940 00:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.201 00:32:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.201 00:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:02.201 00:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kOCGRAvDPS 00:21:02.201 00:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:02.201 [2024-07-16 00:32:15.714946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.201 00:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.461 00:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.461 [2024-07-16 00:32:15.995638] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.461 [2024-07-16 00:32:15.995815] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.461 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.722 malloc0 00:21:02.722 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.722 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kOCGRAvDPS 00:21:02.982 [2024-07-16 00:32:16.466871] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kOCGRAvDPS 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kOCGRAvDPS' 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1128682 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1128682 /var/tmp/bdevperf.sock 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128682 ']' 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.982 00:32:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.982 [2024-07-16 00:32:16.529851] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
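Note: before this bdevperf run, @163-@165 brought up a target that actually owns the new key: the trace above creates the TCP transport, a subsystem with a TLS-enabled listener (-k), a malloc namespace, and registers host1 with the 0600-mode key file. Collected into one place, the target-side sequence is essentially:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS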
00:21:02.982 [2024-07-16 00:32:16.529903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128682 ] 00:21:02.982 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.982 [2024-07-16 00:32:16.584773] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.243 [2024-07-16 00:32:16.637353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.813 00:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.813 00:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.813 00:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS 00:21:04.075 [2024-07-16 00:32:17.450528] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.075 [2024-07-16 00:32:17.450582] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:04.075 TLSTESTn1 00:21:04.075 00:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:04.075 Running I/O for 10 seconds... 00:21:16.299 00:21:16.299 Latency(us) 00:21:16.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.299 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.299 Verification LBA range: start 0x0 length 0x2000 00:21:16.299 TLSTESTn1 : 10.06 4856.80 18.97 0.00 0.00 26304.15 6280.53 97430.19 00:21:16.299 =================================================================================================================== 00:21:16.299 Total : 4856.80 18.97 0.00 0.00 26304.15 6280.53 97430.19 00:21:16.299 0 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1128682 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128682 ']' 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128682 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128682 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128682' 00:21:16.299 killing process with pid 1128682 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128682 00:21:16.299 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.299 00:21:16.299 Latency(us) 00:21:16.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:16.299 =================================================================================================================== 00:21:16.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.299 [2024-07-16 00:32:27.791721] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128682 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.kOCGRAvDPS 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kOCGRAvDPS 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kOCGRAvDPS 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kOCGRAvDPS 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kOCGRAvDPS' 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130927 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130927 /var/tmp/bdevperf.sock 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1130927 ']' 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.299 00:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.299 [2024-07-16 00:32:27.963017] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:21:16.299 [2024-07-16 00:32:27.963072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130927 ] 00:21:16.300 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.300 [2024-07-16 00:32:28.019195] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.300 [2024-07-16 00:32:28.069929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS 00:21:16.300 [2024-07-16 00:32:28.878846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.300 [2024-07-16 00:32:28.878888] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:16.300 [2024-07-16 00:32:28.878893] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.kOCGRAvDPS 00:21:16.300 request: 00:21:16.300 { 00:21:16.300 "name": "TLSTEST", 00:21:16.300 "trtype": "tcp", 00:21:16.300 "traddr": "10.0.0.2", 00:21:16.300 "adrfam": "ipv4", 00:21:16.300 "trsvcid": "4420", 00:21:16.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.300 "prchk_reftag": false, 00:21:16.300 "prchk_guard": false, 00:21:16.300 "hdgst": false, 00:21:16.300 "ddgst": false, 00:21:16.300 "psk": "/tmp/tmp.kOCGRAvDPS", 00:21:16.300 "method": "bdev_nvme_attach_controller", 00:21:16.300 "req_id": 1 00:21:16.300 } 00:21:16.300 Got JSON-RPC error response 00:21:16.300 response: 00:21:16.300 { 00:21:16.300 "code": -1, 00:21:16.300 "message": "Operation not permitted" 00:21:16.300 } 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1130927 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1130927 ']' 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1130927 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1130927 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1130927' 00:21:16.300 killing process with pid 1130927 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1130927 00:21:16.300 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.300 00:21:16.300 Latency(us) 00:21:16.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.300 
=================================================================================================================== 00:21:16.300 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.300 00:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1130927 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1128295 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128295 ']' 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128295 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128295 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128295' 00:21:16.300 killing process with pid 1128295 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128295 00:21:16.300 [2024-07-16 00:32:29.126508] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128295 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1131269 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1131269 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1131269 ']' 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
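Note: @170-@171 exercise the key-permission check on the initiator side: after chmod 0666 on /tmp/tmp.kOCGRAvDPS, bdev_nvme_attach_controller refuses to load the PSK ("Incorrect permissions for PSK file") and the RPC returns -1 / Operation not permitted, even though the key contents are unchanged. The target enforces the same rule, which is what the next case (@175-@177, via nvmf_subsystem_add_host) checks. In short, the requirement exercised here is:

    chmod 0600 /tmp/tmp.kOCGRAvDPS   # accepted by both bdev_nvme_attach_controller --psk and nvmf_subsystem_add_host --psk
    chmod 0666 /tmp/tmp.kOCGRAvDPS   # rejected on either side with "Incorrect permissions for PSK file"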
00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.300 00:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.300 [2024-07-16 00:32:29.305209] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:16.300 [2024-07-16 00:32:29.305268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.300 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.300 [2024-07-16 00:32:29.395780] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.300 [2024-07-16 00:32:29.448729] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.300 [2024-07-16 00:32:29.448763] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.300 [2024-07-16 00:32:29.448769] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.300 [2024-07-16 00:32:29.448774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.300 [2024-07-16 00:32:29.448778] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.300 [2024-07-16 00:32:29.448794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:16.560 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kOCGRAvDPS 00:21:16.561 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.821 [2024-07-16 00:32:30.266748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.821 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.821 
00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:17.081 [2024-07-16 00:32:30.579511] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.081 [2024-07-16 00:32:30.579699] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.081 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:17.342 malloc0 00:21:17.342 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:17.342 00:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS 00:21:17.602 [2024-07-16 00:32:31.042710] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:17.603 [2024-07-16 00:32:31.042732] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:17.603 [2024-07-16 00:32:31.042752] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:17.603 request: 00:21:17.603 { 00:21:17.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.603 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.603 "psk": "/tmp/tmp.kOCGRAvDPS", 00:21:17.603 "method": "nvmf_subsystem_add_host", 00:21:17.603 "req_id": 1 00:21:17.603 } 00:21:17.603 Got JSON-RPC error response 00:21:17.603 response: 00:21:17.603 { 00:21:17.603 "code": -32603, 00:21:17.603 "message": "Internal error" 00:21:17.603 } 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1131269 ']' 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131269' 00:21:17.603 killing process with pid 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1131269 00:21:17.603 00:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.kOCGRAvDPS 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:17.863 
00:32:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1131642 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1131642 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1131642 ']' 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.863 00:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.863 [2024-07-16 00:32:31.305124] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:17.863 [2024-07-16 00:32:31.305184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.863 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.863 [2024-07-16 00:32:31.394465] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.863 [2024-07-16 00:32:31.448718] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.863 [2024-07-16 00:32:31.448751] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.863 [2024-07-16 00:32:31.448757] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.863 [2024-07-16 00:32:31.448761] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.863 [2024-07-16 00:32:31.448766] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
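For reference, the target-side TLS setup that setup_nvmf_tgt (target/tls.sh@49-58) drives in this run reduces to the short RPC sequence below. This is a condensed sketch, not part of the captured output: the relative rpc.py path assumes an SPDK checkout, and /tmp/tmp.kOCGRAvDPS is the PSK file the test generated earlier. The chmod matters here, since the first nvmf_subsystem_add_host above failed with "Incorrect permissions for PSK file" and a -32603 JSON-RPC error until tls.sh@181 tightened the mode.

# Condensed sketch of the setup_nvmf_tgt flow exercised in this run (paths assume an SPDK checkout).
chmod 0600 /tmp/tmp.kOCGRAvDPS   # loose permissions make tcp_load_psk reject the PSK file
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS   # PSK-path form, deprecated, removal planned for v24.09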
00:21:17.863 [2024-07-16 00:32:31.448779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.434 00:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.434 00:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.434 00:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.434 00:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.434 00:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.694 00:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.694 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:18.694 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kOCGRAvDPS 00:21:18.695 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.695 [2024-07-16 00:32:32.238614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.695 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.955 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.955 [2024-07-16 00:32:32.547369] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.955 [2024-07-16 00:32:32.547547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.955 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:19.215 malloc0 00:21:19.215 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.474 00:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS 00:21:19.474 [2024-07-16 00:32:32.986504] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1132004 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1132004 /var/tmp/bdevperf.sock 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132004 ']' 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.474 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.474 [2024-07-16 00:32:33.049947] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:19.474 [2024-07-16 00:32:33.049996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132004 ] 00:21:19.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.474 [2024-07-16 00:32:33.104738] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.734 [2024-07-16 00:32:33.156897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.305 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.305 00:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.305 00:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS 00:21:20.305 [2024-07-16 00:32:33.933829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.305 [2024-07-16 00:32:33.933878] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.565 TLSTESTn1 00:21:20.565 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:20.826 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:20.826 "subsystems": [ 00:21:20.826 { 00:21:20.826 "subsystem": "keyring", 00:21:20.826 "config": [] 00:21:20.826 }, 00:21:20.826 { 00:21:20.826 "subsystem": "iobuf", 00:21:20.826 "config": [ 00:21:20.826 { 00:21:20.826 "method": "iobuf_set_options", 00:21:20.826 "params": { 00:21:20.826 "small_pool_count": 8192, 00:21:20.826 "large_pool_count": 1024, 00:21:20.826 "small_bufsize": 8192, 00:21:20.826 "large_bufsize": 135168 00:21:20.826 } 00:21:20.826 } 00:21:20.826 ] 00:21:20.826 }, 00:21:20.826 { 00:21:20.826 "subsystem": "sock", 00:21:20.826 "config": [ 00:21:20.826 { 00:21:20.826 "method": "sock_set_default_impl", 00:21:20.826 "params": { 00:21:20.826 "impl_name": "posix" 00:21:20.826 } 00:21:20.826 }, 00:21:20.826 { 00:21:20.826 "method": "sock_impl_set_options", 00:21:20.826 "params": { 00:21:20.826 "impl_name": "ssl", 00:21:20.826 "recv_buf_size": 4096, 00:21:20.826 "send_buf_size": 4096, 00:21:20.826 "enable_recv_pipe": true, 00:21:20.826 "enable_quickack": false, 00:21:20.826 "enable_placement_id": 0, 00:21:20.826 "enable_zerocopy_send_server": true, 00:21:20.826 "enable_zerocopy_send_client": false, 00:21:20.826 "zerocopy_threshold": 0, 00:21:20.826 "tls_version": 0, 00:21:20.826 "enable_ktls": false 00:21:20.826 } 00:21:20.826 }, 00:21:20.826 { 00:21:20.826 "method": "sock_impl_set_options", 00:21:20.826 "params": { 00:21:20.826 "impl_name": "posix", 00:21:20.826 "recv_buf_size": 2097152, 00:21:20.826 
"send_buf_size": 2097152, 00:21:20.826 "enable_recv_pipe": true, 00:21:20.826 "enable_quickack": false, 00:21:20.826 "enable_placement_id": 0, 00:21:20.826 "enable_zerocopy_send_server": true, 00:21:20.827 "enable_zerocopy_send_client": false, 00:21:20.827 "zerocopy_threshold": 0, 00:21:20.827 "tls_version": 0, 00:21:20.827 "enable_ktls": false 00:21:20.827 } 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "vmd", 00:21:20.827 "config": [] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "accel", 00:21:20.827 "config": [ 00:21:20.827 { 00:21:20.827 "method": "accel_set_options", 00:21:20.827 "params": { 00:21:20.827 "small_cache_size": 128, 00:21:20.827 "large_cache_size": 16, 00:21:20.827 "task_count": 2048, 00:21:20.827 "sequence_count": 2048, 00:21:20.827 "buf_count": 2048 00:21:20.827 } 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "bdev", 00:21:20.827 "config": [ 00:21:20.827 { 00:21:20.827 "method": "bdev_set_options", 00:21:20.827 "params": { 00:21:20.827 "bdev_io_pool_size": 65535, 00:21:20.827 "bdev_io_cache_size": 256, 00:21:20.827 "bdev_auto_examine": true, 00:21:20.827 "iobuf_small_cache_size": 128, 00:21:20.827 "iobuf_large_cache_size": 16 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_raid_set_options", 00:21:20.827 "params": { 00:21:20.827 "process_window_size_kb": 1024 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_iscsi_set_options", 00:21:20.827 "params": { 00:21:20.827 "timeout_sec": 30 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_nvme_set_options", 00:21:20.827 "params": { 00:21:20.827 "action_on_timeout": "none", 00:21:20.827 "timeout_us": 0, 00:21:20.827 "timeout_admin_us": 0, 00:21:20.827 "keep_alive_timeout_ms": 10000, 00:21:20.827 "arbitration_burst": 0, 00:21:20.827 "low_priority_weight": 0, 00:21:20.827 "medium_priority_weight": 0, 00:21:20.827 "high_priority_weight": 0, 00:21:20.827 "nvme_adminq_poll_period_us": 10000, 00:21:20.827 "nvme_ioq_poll_period_us": 0, 00:21:20.827 "io_queue_requests": 0, 00:21:20.827 "delay_cmd_submit": true, 00:21:20.827 "transport_retry_count": 4, 00:21:20.827 "bdev_retry_count": 3, 00:21:20.827 "transport_ack_timeout": 0, 00:21:20.827 "ctrlr_loss_timeout_sec": 0, 00:21:20.827 "reconnect_delay_sec": 0, 00:21:20.827 "fast_io_fail_timeout_sec": 0, 00:21:20.827 "disable_auto_failback": false, 00:21:20.827 "generate_uuids": false, 00:21:20.827 "transport_tos": 0, 00:21:20.827 "nvme_error_stat": false, 00:21:20.827 "rdma_srq_size": 0, 00:21:20.827 "io_path_stat": false, 00:21:20.827 "allow_accel_sequence": false, 00:21:20.827 "rdma_max_cq_size": 0, 00:21:20.827 "rdma_cm_event_timeout_ms": 0, 00:21:20.827 "dhchap_digests": [ 00:21:20.827 "sha256", 00:21:20.827 "sha384", 00:21:20.827 "sha512" 00:21:20.827 ], 00:21:20.827 "dhchap_dhgroups": [ 00:21:20.827 "null", 00:21:20.827 "ffdhe2048", 00:21:20.827 "ffdhe3072", 00:21:20.827 "ffdhe4096", 00:21:20.827 "ffdhe6144", 00:21:20.827 "ffdhe8192" 00:21:20.827 ] 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_nvme_set_hotplug", 00:21:20.827 "params": { 00:21:20.827 "period_us": 100000, 00:21:20.827 "enable": false 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_malloc_create", 00:21:20.827 "params": { 00:21:20.827 "name": "malloc0", 00:21:20.827 "num_blocks": 8192, 00:21:20.827 "block_size": 4096, 00:21:20.827 "physical_block_size": 4096, 00:21:20.827 "uuid": 
"794ca3c2-489e-4561-8936-db81f960eb38", 00:21:20.827 "optimal_io_boundary": 0 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "bdev_wait_for_examine" 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "nbd", 00:21:20.827 "config": [] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "scheduler", 00:21:20.827 "config": [ 00:21:20.827 { 00:21:20.827 "method": "framework_set_scheduler", 00:21:20.827 "params": { 00:21:20.827 "name": "static" 00:21:20.827 } 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "subsystem": "nvmf", 00:21:20.827 "config": [ 00:21:20.827 { 00:21:20.827 "method": "nvmf_set_config", 00:21:20.827 "params": { 00:21:20.827 "discovery_filter": "match_any", 00:21:20.827 "admin_cmd_passthru": { 00:21:20.827 "identify_ctrlr": false 00:21:20.827 } 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_set_max_subsystems", 00:21:20.827 "params": { 00:21:20.827 "max_subsystems": 1024 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_set_crdt", 00:21:20.827 "params": { 00:21:20.827 "crdt1": 0, 00:21:20.827 "crdt2": 0, 00:21:20.827 "crdt3": 0 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_create_transport", 00:21:20.827 "params": { 00:21:20.827 "trtype": "TCP", 00:21:20.827 "max_queue_depth": 128, 00:21:20.827 "max_io_qpairs_per_ctrlr": 127, 00:21:20.827 "in_capsule_data_size": 4096, 00:21:20.827 "max_io_size": 131072, 00:21:20.827 "io_unit_size": 131072, 00:21:20.827 "max_aq_depth": 128, 00:21:20.827 "num_shared_buffers": 511, 00:21:20.827 "buf_cache_size": 4294967295, 00:21:20.827 "dif_insert_or_strip": false, 00:21:20.827 "zcopy": false, 00:21:20.827 "c2h_success": false, 00:21:20.827 "sock_priority": 0, 00:21:20.827 "abort_timeout_sec": 1, 00:21:20.827 "ack_timeout": 0, 00:21:20.827 "data_wr_pool_size": 0 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_create_subsystem", 00:21:20.827 "params": { 00:21:20.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.827 "allow_any_host": false, 00:21:20.827 "serial_number": "SPDK00000000000001", 00:21:20.827 "model_number": "SPDK bdev Controller", 00:21:20.827 "max_namespaces": 10, 00:21:20.827 "min_cntlid": 1, 00:21:20.827 "max_cntlid": 65519, 00:21:20.827 "ana_reporting": false 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_subsystem_add_host", 00:21:20.827 "params": { 00:21:20.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.827 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.827 "psk": "/tmp/tmp.kOCGRAvDPS" 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_subsystem_add_ns", 00:21:20.827 "params": { 00:21:20.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.827 "namespace": { 00:21:20.827 "nsid": 1, 00:21:20.827 "bdev_name": "malloc0", 00:21:20.827 "nguid": "794CA3C2489E45618936DB81F960EB38", 00:21:20.827 "uuid": "794ca3c2-489e-4561-8936-db81f960eb38", 00:21:20.827 "no_auto_visible": false 00:21:20.827 } 00:21:20.827 } 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "method": "nvmf_subsystem_add_listener", 00:21:20.827 "params": { 00:21:20.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.827 "listen_address": { 00:21:20.827 "trtype": "TCP", 00:21:20.827 "adrfam": "IPv4", 00:21:20.827 "traddr": "10.0.0.2", 00:21:20.827 "trsvcid": "4420" 00:21:20.827 }, 00:21:20.827 "secure_channel": true 00:21:20.827 } 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }' 00:21:20.827 00:32:34 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:21.103 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:21.103 "subsystems": [ 00:21:21.103 { 00:21:21.103 "subsystem": "keyring", 00:21:21.103 "config": [] 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "subsystem": "iobuf", 00:21:21.103 "config": [ 00:21:21.103 { 00:21:21.103 "method": "iobuf_set_options", 00:21:21.103 "params": { 00:21:21.103 "small_pool_count": 8192, 00:21:21.103 "large_pool_count": 1024, 00:21:21.103 "small_bufsize": 8192, 00:21:21.103 "large_bufsize": 135168 00:21:21.103 } 00:21:21.103 } 00:21:21.103 ] 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "subsystem": "sock", 00:21:21.103 "config": [ 00:21:21.103 { 00:21:21.103 "method": "sock_set_default_impl", 00:21:21.103 "params": { 00:21:21.103 "impl_name": "posix" 00:21:21.103 } 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "method": "sock_impl_set_options", 00:21:21.103 "params": { 00:21:21.103 "impl_name": "ssl", 00:21:21.103 "recv_buf_size": 4096, 00:21:21.103 "send_buf_size": 4096, 00:21:21.103 "enable_recv_pipe": true, 00:21:21.103 "enable_quickack": false, 00:21:21.103 "enable_placement_id": 0, 00:21:21.103 "enable_zerocopy_send_server": true, 00:21:21.103 "enable_zerocopy_send_client": false, 00:21:21.103 "zerocopy_threshold": 0, 00:21:21.103 "tls_version": 0, 00:21:21.103 "enable_ktls": false 00:21:21.103 } 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "method": "sock_impl_set_options", 00:21:21.103 "params": { 00:21:21.103 "impl_name": "posix", 00:21:21.103 "recv_buf_size": 2097152, 00:21:21.103 "send_buf_size": 2097152, 00:21:21.103 "enable_recv_pipe": true, 00:21:21.103 "enable_quickack": false, 00:21:21.103 "enable_placement_id": 0, 00:21:21.103 "enable_zerocopy_send_server": true, 00:21:21.103 "enable_zerocopy_send_client": false, 00:21:21.103 "zerocopy_threshold": 0, 00:21:21.103 "tls_version": 0, 00:21:21.103 "enable_ktls": false 00:21:21.103 } 00:21:21.103 } 00:21:21.103 ] 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "subsystem": "vmd", 00:21:21.103 "config": [] 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "subsystem": "accel", 00:21:21.103 "config": [ 00:21:21.103 { 00:21:21.103 "method": "accel_set_options", 00:21:21.103 "params": { 00:21:21.103 "small_cache_size": 128, 00:21:21.103 "large_cache_size": 16, 00:21:21.103 "task_count": 2048, 00:21:21.103 "sequence_count": 2048, 00:21:21.103 "buf_count": 2048 00:21:21.103 } 00:21:21.103 } 00:21:21.103 ] 00:21:21.103 }, 00:21:21.103 { 00:21:21.103 "subsystem": "bdev", 00:21:21.103 "config": [ 00:21:21.103 { 00:21:21.104 "method": "bdev_set_options", 00:21:21.104 "params": { 00:21:21.104 "bdev_io_pool_size": 65535, 00:21:21.104 "bdev_io_cache_size": 256, 00:21:21.104 "bdev_auto_examine": true, 00:21:21.104 "iobuf_small_cache_size": 128, 00:21:21.104 "iobuf_large_cache_size": 16 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_raid_set_options", 00:21:21.104 "params": { 00:21:21.104 "process_window_size_kb": 1024 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_iscsi_set_options", 00:21:21.104 "params": { 00:21:21.104 "timeout_sec": 30 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_nvme_set_options", 00:21:21.104 "params": { 00:21:21.104 "action_on_timeout": "none", 00:21:21.104 "timeout_us": 0, 00:21:21.104 "timeout_admin_us": 0, 00:21:21.104 "keep_alive_timeout_ms": 10000, 00:21:21.104 "arbitration_burst": 0, 
00:21:21.104 "low_priority_weight": 0, 00:21:21.104 "medium_priority_weight": 0, 00:21:21.104 "high_priority_weight": 0, 00:21:21.104 "nvme_adminq_poll_period_us": 10000, 00:21:21.104 "nvme_ioq_poll_period_us": 0, 00:21:21.104 "io_queue_requests": 512, 00:21:21.104 "delay_cmd_submit": true, 00:21:21.104 "transport_retry_count": 4, 00:21:21.104 "bdev_retry_count": 3, 00:21:21.104 "transport_ack_timeout": 0, 00:21:21.104 "ctrlr_loss_timeout_sec": 0, 00:21:21.104 "reconnect_delay_sec": 0, 00:21:21.104 "fast_io_fail_timeout_sec": 0, 00:21:21.104 "disable_auto_failback": false, 00:21:21.104 "generate_uuids": false, 00:21:21.104 "transport_tos": 0, 00:21:21.104 "nvme_error_stat": false, 00:21:21.104 "rdma_srq_size": 0, 00:21:21.104 "io_path_stat": false, 00:21:21.104 "allow_accel_sequence": false, 00:21:21.104 "rdma_max_cq_size": 0, 00:21:21.104 "rdma_cm_event_timeout_ms": 0, 00:21:21.104 "dhchap_digests": [ 00:21:21.104 "sha256", 00:21:21.104 "sha384", 00:21:21.104 "sha512" 00:21:21.104 ], 00:21:21.104 "dhchap_dhgroups": [ 00:21:21.104 "null", 00:21:21.104 "ffdhe2048", 00:21:21.104 "ffdhe3072", 00:21:21.104 "ffdhe4096", 00:21:21.104 "ffdhe6144", 00:21:21.104 "ffdhe8192" 00:21:21.104 ] 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_nvme_attach_controller", 00:21:21.104 "params": { 00:21:21.104 "name": "TLSTEST", 00:21:21.104 "trtype": "TCP", 00:21:21.104 "adrfam": "IPv4", 00:21:21.104 "traddr": "10.0.0.2", 00:21:21.104 "trsvcid": "4420", 00:21:21.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.104 "prchk_reftag": false, 00:21:21.104 "prchk_guard": false, 00:21:21.104 "ctrlr_loss_timeout_sec": 0, 00:21:21.104 "reconnect_delay_sec": 0, 00:21:21.104 "fast_io_fail_timeout_sec": 0, 00:21:21.104 "psk": "/tmp/tmp.kOCGRAvDPS", 00:21:21.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.104 "hdgst": false, 00:21:21.104 "ddgst": false 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_nvme_set_hotplug", 00:21:21.104 "params": { 00:21:21.104 "period_us": 100000, 00:21:21.104 "enable": false 00:21:21.104 } 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "method": "bdev_wait_for_examine" 00:21:21.104 } 00:21:21.104 ] 00:21:21.104 }, 00:21:21.104 { 00:21:21.104 "subsystem": "nbd", 00:21:21.104 "config": [] 00:21:21.104 } 00:21:21.104 ] 00:21:21.104 }' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1132004 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1132004 ']' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132004 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132004 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132004' 00:21:21.104 killing process with pid 1132004 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132004 00:21:21.104 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.104 00:21:21.104 Latency(us) 00:21:21.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:21.104 =================================================================================================================== 00:21:21.104 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.104 [2024-07-16 00:32:34.570872] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132004 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1131642 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1131642 ']' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1131642 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.104 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131642 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131642' 00:21:21.366 killing process with pid 1131642 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1131642 00:21:21.366 [2024-07-16 00:32:34.739984] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1131642 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.366 00:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:21.366 "subsystems": [ 00:21:21.366 { 00:21:21.366 "subsystem": "keyring", 00:21:21.366 "config": [] 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "subsystem": "iobuf", 00:21:21.366 "config": [ 00:21:21.366 { 00:21:21.366 "method": "iobuf_set_options", 00:21:21.366 "params": { 00:21:21.366 "small_pool_count": 8192, 00:21:21.366 "large_pool_count": 1024, 00:21:21.366 "small_bufsize": 8192, 00:21:21.366 "large_bufsize": 135168 00:21:21.366 } 00:21:21.366 } 00:21:21.366 ] 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "subsystem": "sock", 00:21:21.366 "config": [ 00:21:21.366 { 00:21:21.366 "method": "sock_set_default_impl", 00:21:21.366 "params": { 00:21:21.366 "impl_name": "posix" 00:21:21.366 } 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "method": "sock_impl_set_options", 00:21:21.366 "params": { 00:21:21.366 "impl_name": "ssl", 00:21:21.366 "recv_buf_size": 4096, 00:21:21.366 "send_buf_size": 4096, 00:21:21.366 "enable_recv_pipe": true, 00:21:21.366 "enable_quickack": false, 00:21:21.366 "enable_placement_id": 0, 00:21:21.366 "enable_zerocopy_send_server": true, 00:21:21.366 "enable_zerocopy_send_client": false, 00:21:21.366 "zerocopy_threshold": 0, 00:21:21.366 "tls_version": 0, 00:21:21.366 "enable_ktls": false 00:21:21.366 } 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "method": "sock_impl_set_options", 
00:21:21.366 "params": { 00:21:21.366 "impl_name": "posix", 00:21:21.366 "recv_buf_size": 2097152, 00:21:21.366 "send_buf_size": 2097152, 00:21:21.366 "enable_recv_pipe": true, 00:21:21.366 "enable_quickack": false, 00:21:21.366 "enable_placement_id": 0, 00:21:21.366 "enable_zerocopy_send_server": true, 00:21:21.366 "enable_zerocopy_send_client": false, 00:21:21.366 "zerocopy_threshold": 0, 00:21:21.366 "tls_version": 0, 00:21:21.366 "enable_ktls": false 00:21:21.366 } 00:21:21.366 } 00:21:21.366 ] 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "subsystem": "vmd", 00:21:21.366 "config": [] 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "subsystem": "accel", 00:21:21.366 "config": [ 00:21:21.366 { 00:21:21.366 "method": "accel_set_options", 00:21:21.366 "params": { 00:21:21.366 "small_cache_size": 128, 00:21:21.366 "large_cache_size": 16, 00:21:21.366 "task_count": 2048, 00:21:21.366 "sequence_count": 2048, 00:21:21.366 "buf_count": 2048 00:21:21.366 } 00:21:21.366 } 00:21:21.366 ] 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "subsystem": "bdev", 00:21:21.366 "config": [ 00:21:21.366 { 00:21:21.366 "method": "bdev_set_options", 00:21:21.366 "params": { 00:21:21.366 "bdev_io_pool_size": 65535, 00:21:21.366 "bdev_io_cache_size": 256, 00:21:21.366 "bdev_auto_examine": true, 00:21:21.366 "iobuf_small_cache_size": 128, 00:21:21.366 "iobuf_large_cache_size": 16 00:21:21.366 } 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "method": "bdev_raid_set_options", 00:21:21.366 "params": { 00:21:21.366 "process_window_size_kb": 1024 00:21:21.366 } 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "method": "bdev_iscsi_set_options", 00:21:21.366 "params": { 00:21:21.366 "timeout_sec": 30 00:21:21.366 } 00:21:21.366 }, 00:21:21.366 { 00:21:21.366 "method": "bdev_nvme_set_options", 00:21:21.366 "params": { 00:21:21.366 "action_on_timeout": "none", 00:21:21.366 "timeout_us": 0, 00:21:21.366 "timeout_admin_us": 0, 00:21:21.366 "keep_alive_timeout_ms": 10000, 00:21:21.366 "arbitration_burst": 0, 00:21:21.366 "low_priority_weight": 0, 00:21:21.366 "medium_priority_weight": 0, 00:21:21.366 "high_priority_weight": 0, 00:21:21.366 "nvme_adminq_poll_period_us": 10000, 00:21:21.366 "nvme_ioq_poll_period_us": 0, 00:21:21.366 "io_queue_requests": 0, 00:21:21.366 "delay_cmd_submit": true, 00:21:21.366 "transport_retry_count": 4, 00:21:21.366 "bdev_retry_count": 3, 00:21:21.366 "transport_ack_timeout": 0, 00:21:21.366 "ctrlr_loss_timeout_sec": 0, 00:21:21.366 "reconnect_delay_sec": 0, 00:21:21.366 "fast_io_fail_timeout_sec": 0, 00:21:21.366 "disable_auto_failback": false, 00:21:21.366 "generate_uuids": false, 00:21:21.366 "transport_tos": 0, 00:21:21.366 "nvme_error_stat": false, 00:21:21.366 "rdma_srq_size": 0, 00:21:21.366 "io_path_stat": false, 00:21:21.366 "allow_accel_sequence": false, 00:21:21.366 "rdma_max_cq_size": 0, 00:21:21.366 "rdma_cm_event_timeout_ms": 0, 00:21:21.366 "dhchap_digests": [ 00:21:21.366 "sha256", 00:21:21.366 "sha384", 00:21:21.366 "sha512" 00:21:21.366 ], 00:21:21.366 "dhchap_dhgroups": [ 00:21:21.366 "null", 00:21:21.366 "ffdhe2048", 00:21:21.366 "ffdhe3072", 00:21:21.366 "ffdhe4096", 00:21:21.366 "ffdhe6144", 00:21:21.367 "ffdhe8192" 00:21:21.367 ] 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "bdev_nvme_set_hotplug", 00:21:21.367 "params": { 00:21:21.367 "period_us": 100000, 00:21:21.367 "enable": false 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "bdev_malloc_create", 00:21:21.367 "params": { 00:21:21.367 "name": "malloc0", 00:21:21.367 "num_blocks": 8192, 
00:21:21.367 "block_size": 4096, 00:21:21.367 "physical_block_size": 4096, 00:21:21.367 "uuid": "794ca3c2-489e-4561-8936-db81f960eb38", 00:21:21.367 "optimal_io_boundary": 0 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "bdev_wait_for_examine" 00:21:21.367 } 00:21:21.367 ] 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "subsystem": "nbd", 00:21:21.367 "config": [] 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "subsystem": "scheduler", 00:21:21.367 "config": [ 00:21:21.367 { 00:21:21.367 "method": "framework_set_scheduler", 00:21:21.367 "params": { 00:21:21.367 "name": "static" 00:21:21.367 } 00:21:21.367 } 00:21:21.367 ] 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "subsystem": "nvmf", 00:21:21.367 "config": [ 00:21:21.367 { 00:21:21.367 "method": "nvmf_set_config", 00:21:21.367 "params": { 00:21:21.367 "discovery_filter": "match_any", 00:21:21.367 "admin_cmd_passthru": { 00:21:21.367 "identify_ctrlr": false 00:21:21.367 } 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_set_max_subsystems", 00:21:21.367 "params": { 00:21:21.367 "max_subsystems": 1024 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_set_crdt", 00:21:21.367 "params": { 00:21:21.367 "crdt1": 0, 00:21:21.367 "crdt2": 0, 00:21:21.367 "crdt3": 0 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_create_transport", 00:21:21.367 "params": { 00:21:21.367 "trtype": "TCP", 00:21:21.367 "max_queue_depth": 128, 00:21:21.367 "max_io_qpairs_per_ctrlr": 127, 00:21:21.367 "in_capsule_data_size": 4096, 00:21:21.367 "max_io_size": 131072, 00:21:21.367 "io_unit_size": 131072, 00:21:21.367 "max_aq_depth": 128, 00:21:21.367 "num_shared_buffers": 511, 00:21:21.367 "buf_cache_size": 4294967295, 00:21:21.367 "dif_insert_or_strip": false, 00:21:21.367 "zcopy": false, 00:21:21.367 "c2h_success": false, 00:21:21.367 "sock_priority": 0, 00:21:21.367 "abort_timeout_sec": 1, 00:21:21.367 "ack_timeout": 0, 00:21:21.367 "data_wr_pool_size": 0 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_create_subsystem", 00:21:21.367 "params": { 00:21:21.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.367 "allow_any_host": false, 00:21:21.367 "serial_number": "SPDK00000000000001", 00:21:21.367 "model_number": "SPDK bdev Controller", 00:21:21.367 "max_namespaces": 10, 00:21:21.367 "min_cntlid": 1, 00:21:21.367 "max_cntlid": 65519, 00:21:21.367 "ana_reporting": false 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_subsystem_add_host", 00:21:21.367 "params": { 00:21:21.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.367 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.367 "psk": "/tmp/tmp.kOCGRAvDPS" 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_subsystem_add_ns", 00:21:21.367 "params": { 00:21:21.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.367 "namespace": { 00:21:21.367 "nsid": 1, 00:21:21.367 "bdev_name": "malloc0", 00:21:21.367 "nguid": "794CA3C2489E45618936DB81F960EB38", 00:21:21.367 "uuid": "794ca3c2-489e-4561-8936-db81f960eb38", 00:21:21.367 "no_auto_visible": false 00:21:21.367 } 00:21:21.367 } 00:21:21.367 }, 00:21:21.367 { 00:21:21.367 "method": "nvmf_subsystem_add_listener", 00:21:21.367 "params": { 00:21:21.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.367 "listen_address": { 00:21:21.367 "trtype": "TCP", 00:21:21.367 "adrfam": "IPv4", 00:21:21.367 "traddr": "10.0.0.2", 00:21:21.367 "trsvcid": "4420" 00:21:21.367 }, 00:21:21.367 "secure_channel": true 00:21:21.367 } 
00:21:21.367 } 00:21:21.367 ] 00:21:21.367 } 00:21:21.367 ] 00:21:21.367 }' 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1132358 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1132358 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132358 ']' 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.367 00:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.367 [2024-07-16 00:32:34.920548] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:21.367 [2024-07-16 00:32:34.920608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.367 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.628 [2024-07-16 00:32:35.006368] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.628 [2024-07-16 00:32:35.059729] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.628 [2024-07-16 00:32:35.059761] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.628 [2024-07-16 00:32:35.059766] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.628 [2024-07-16 00:32:35.059770] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.628 [2024-07-16 00:32:35.059774] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.628 [2024-07-16 00:32:35.059820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.628 [2024-07-16 00:32:35.243273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.628 [2024-07-16 00:32:35.259256] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.888 [2024-07-16 00:32:35.275300] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.888 [2024-07-16 00:32:35.283574] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1132559 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1132559 /var/tmp/bdevperf.sock 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132559 ']' 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:22.149 00:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:22.149 "subsystems": [ 00:21:22.149 { 00:21:22.149 "subsystem": "keyring", 00:21:22.149 "config": [] 00:21:22.149 }, 00:21:22.149 { 00:21:22.149 "subsystem": "iobuf", 00:21:22.149 "config": [ 00:21:22.149 { 00:21:22.149 "method": "iobuf_set_options", 00:21:22.149 "params": { 00:21:22.149 "small_pool_count": 8192, 00:21:22.149 "large_pool_count": 1024, 00:21:22.149 "small_bufsize": 8192, 00:21:22.149 "large_bufsize": 135168 00:21:22.149 } 00:21:22.149 } 00:21:22.149 ] 00:21:22.149 }, 00:21:22.149 { 00:21:22.149 "subsystem": "sock", 00:21:22.149 "config": [ 00:21:22.149 { 00:21:22.149 "method": "sock_set_default_impl", 00:21:22.149 "params": { 00:21:22.149 "impl_name": "posix" 00:21:22.149 } 00:21:22.149 }, 00:21:22.149 { 00:21:22.149 "method": "sock_impl_set_options", 00:21:22.149 "params": { 00:21:22.149 "impl_name": "ssl", 00:21:22.149 "recv_buf_size": 4096, 00:21:22.149 "send_buf_size": 4096, 00:21:22.149 "enable_recv_pipe": true, 00:21:22.149 "enable_quickack": false, 00:21:22.149 "enable_placement_id": 0, 00:21:22.149 "enable_zerocopy_send_server": true, 00:21:22.149 "enable_zerocopy_send_client": false, 00:21:22.149 "zerocopy_threshold": 0, 00:21:22.149 "tls_version": 0, 00:21:22.150 "enable_ktls": false 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "sock_impl_set_options", 00:21:22.150 "params": { 00:21:22.150 "impl_name": "posix", 00:21:22.150 "recv_buf_size": 2097152, 00:21:22.150 "send_buf_size": 2097152, 00:21:22.150 "enable_recv_pipe": true, 00:21:22.150 "enable_quickack": false, 00:21:22.150 "enable_placement_id": 0, 00:21:22.150 "enable_zerocopy_send_server": true, 00:21:22.150 "enable_zerocopy_send_client": false, 00:21:22.150 "zerocopy_threshold": 0, 00:21:22.150 "tls_version": 0, 00:21:22.150 "enable_ktls": false 00:21:22.150 } 00:21:22.150 } 00:21:22.150 ] 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "subsystem": "vmd", 00:21:22.150 "config": [] 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "subsystem": "accel", 00:21:22.150 "config": [ 00:21:22.150 { 00:21:22.150 "method": "accel_set_options", 00:21:22.150 "params": { 00:21:22.150 "small_cache_size": 128, 00:21:22.150 "large_cache_size": 16, 00:21:22.150 "task_count": 2048, 00:21:22.150 "sequence_count": 2048, 00:21:22.150 "buf_count": 2048 00:21:22.150 } 00:21:22.150 } 00:21:22.150 ] 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "subsystem": "bdev", 00:21:22.150 "config": [ 00:21:22.150 { 00:21:22.150 "method": "bdev_set_options", 00:21:22.150 "params": { 00:21:22.150 "bdev_io_pool_size": 65535, 00:21:22.150 "bdev_io_cache_size": 256, 00:21:22.150 "bdev_auto_examine": true, 00:21:22.150 "iobuf_small_cache_size": 128, 00:21:22.150 "iobuf_large_cache_size": 16 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_raid_set_options", 00:21:22.150 "params": { 00:21:22.150 "process_window_size_kb": 1024 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_iscsi_set_options", 00:21:22.150 "params": { 00:21:22.150 "timeout_sec": 30 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_nvme_set_options", 00:21:22.150 "params": { 00:21:22.150 "action_on_timeout": "none", 00:21:22.150 "timeout_us": 0, 00:21:22.150 "timeout_admin_us": 0, 00:21:22.150 
"keep_alive_timeout_ms": 10000, 00:21:22.150 "arbitration_burst": 0, 00:21:22.150 "low_priority_weight": 0, 00:21:22.150 "medium_priority_weight": 0, 00:21:22.150 "high_priority_weight": 0, 00:21:22.150 "nvme_adminq_poll_period_us": 10000, 00:21:22.150 "nvme_ioq_poll_period_us": 0, 00:21:22.150 "io_queue_requests": 512, 00:21:22.150 "delay_cmd_submit": true, 00:21:22.150 "transport_retry_count": 4, 00:21:22.150 "bdev_retry_count": 3, 00:21:22.150 "transport_ack_timeout": 0, 00:21:22.150 "ctrlr_loss_timeout_sec": 0, 00:21:22.150 "reconnect_delay_sec": 0, 00:21:22.150 "fast_io_fail_timeout_sec": 0, 00:21:22.150 "disable_auto_failback": false, 00:21:22.150 "generate_uuids": false, 00:21:22.150 "transport_tos": 0, 00:21:22.150 "nvme_error_stat": false, 00:21:22.150 "rdma_srq_size": 0, 00:21:22.150 "io_path_stat": false, 00:21:22.150 "allow_accel_sequence": false, 00:21:22.150 "rdma_max_cq_size": 0, 00:21:22.150 "rdma_cm_event_timeout_ms": 0, 00:21:22.150 "dhchap_digests": [ 00:21:22.150 "sha256", 00:21:22.150 "sha384", 00:21:22.150 "sha512" 00:21:22.150 ], 00:21:22.150 "dhchap_dhgroups": [ 00:21:22.150 "null", 00:21:22.150 "ffdhe2048", 00:21:22.150 "ffdhe3072", 00:21:22.150 "ffdhe4096", 00:21:22.150 "ffdhe6144", 00:21:22.150 "ffdhe8192" 00:21:22.150 ] 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_nvme_attach_controller", 00:21:22.150 "params": { 00:21:22.150 "name": "TLSTEST", 00:21:22.150 "trtype": "TCP", 00:21:22.150 "adrfam": "IPv4", 00:21:22.150 "traddr": "10.0.0.2", 00:21:22.150 "trsvcid": "4420", 00:21:22.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.150 "prchk_reftag": false, 00:21:22.150 "prchk_guard": false, 00:21:22.150 "ctrlr_loss_timeout_sec": 0, 00:21:22.150 "reconnect_delay_sec": 0, 00:21:22.150 "fast_io_fail_timeout_sec": 0, 00:21:22.150 "psk": "/tmp/tmp.kOCGRAvDPS", 00:21:22.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.150 "hdgst": false, 00:21:22.150 "ddgst": false 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_nvme_set_hotplug", 00:21:22.150 "params": { 00:21:22.150 "period_us": 100000, 00:21:22.150 "enable": false 00:21:22.150 } 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "method": "bdev_wait_for_examine" 00:21:22.150 } 00:21:22.150 ] 00:21:22.150 }, 00:21:22.150 { 00:21:22.150 "subsystem": "nbd", 00:21:22.150 "config": [] 00:21:22.150 } 00:21:22.150 ] 00:21:22.150 }' 00:21:22.150 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.150 00:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.150 [2024-07-16 00:32:35.766450] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:21:22.150 [2024-07-16 00:32:35.766503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132559 ] 00:21:22.410 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.410 [2024-07-16 00:32:35.821371] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.410 [2024-07-16 00:32:35.873864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.410 [2024-07-16 00:32:35.998483] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.410 [2024-07-16 00:32:35.998545] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:22.978 00:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.978 00:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.978 00:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.978 Running I/O for 10 seconds... 00:21:35.199 00:21:35.199 Latency(us) 00:21:35.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.199 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.199 Verification LBA range: start 0x0 length 0x2000 00:21:35.199 TLSTESTn1 : 10.01 5230.03 20.43 0.00 0.00 24433.59 6826.67 56360.96 00:21:35.199 =================================================================================================================== 00:21:35.199 Total : 5230.03 20.43 0.00 0.00 24433.59 6826.67 56360.96 00:21:35.199 0 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1132559 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1132559 ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132559 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132559 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132559' 00:21:35.199 killing process with pid 1132559 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132559 00:21:35.199 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.199 00:21:35.199 Latency(us) 00:21:35.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.199 =================================================================================================================== 00:21:35.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.199 [2024-07-16 00:32:46.722156] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132559 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1132358 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1132358 ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132358 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132358 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132358' 00:21:35.199 killing process with pid 1132358 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132358 00:21:35.199 [2024-07-16 00:32:46.889828] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.199 00:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132358 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1134732 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1134732 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1134732 ']' 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.199 [2024-07-16 00:32:47.070602] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:21:35.199 [2024-07-16 00:32:47.070660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.199 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.199 [2024-07-16 00:32:47.142723] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.199 [2024-07-16 00:32:47.207532] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.199 [2024-07-16 00:32:47.207570] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.199 [2024-07-16 00:32:47.207578] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.199 [2024-07-16 00:32:47.207584] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.199 [2024-07-16 00:32:47.207590] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.199 [2024-07-16 00:32:47.207613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.kOCGRAvDPS 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kOCGRAvDPS 00:21:35.199 00:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.199 [2024-07-16 00:32:48.002164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.199 [2024-07-16 00:32:48.294885] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.199 [2024-07-16 00:32:48.295079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.199 malloc0 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kOCGRAvDPS 00:21:35.199 [2024-07-16 00:32:48.758911] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1135085 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1135085 /var/tmp/bdevperf.sock 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1135085 ']' 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.199 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.200 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.200 00:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.460 [2024-07-16 00:32:48.835752] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:35.460 [2024-07-16 00:32:48.835819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135085 ] 00:21:35.460 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.460 [2024-07-16 00:32:48.918105] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.460 [2024-07-16 00:32:48.972513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.031 00:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.031 00:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.031 00:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kOCGRAvDPS 00:21:36.292 00:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:36.292 [2024-07-16 00:32:49.874798] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.552 nvme0n1 00:21:36.552 00:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.552 Running I/O for 1 seconds... 
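Condensing the setup_nvmf_tgt trace above and the bdevperf attach that follows it, the TLS path boils down to the RPC sequence sketched below. Every command and flag appears in the trace; only the RPC_TGT/RPC_BPERF shorthands and the relative script paths are assumptions, and the key file and the 10.0.0.2 address are the test's temporary values.

  RPC_TGT="./scripts/rpc.py"                               # target RPC (default /var/tmp/spdk.sock)
  RPC_BPERF="./scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf RPC socket

  # target side: TCP transport, subsystem, TLS-enabled listener (-k), namespace, allowed host + PSK
  $RPC_TGT nvmf_create_transport -t tcp -o
  $RPC_TGT nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC_TGT nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC_TGT bdev_malloc_create 32 4096 -b malloc0
  $RPC_TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC_TGT nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kOCGRAvDPS

  # initiator side: register the PSK with the keyring, then attach the controller over TLS
  $RPC_BPERF keyring_file_add_key key0 /tmp/tmp.kOCGRAvDPS
  $RPC_BPERF bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1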
00:21:37.494 00:21:37.494 Latency(us) 00:21:37.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.494 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:37.494 Verification LBA range: start 0x0 length 0x2000 00:21:37.494 nvme0n1 : 1.02 4930.94 19.26 0.00 0.00 25724.10 4369.07 36263.25 00:21:37.494 =================================================================================================================== 00:21:37.494 Total : 4930.94 19.26 0.00 0.00 25724.10 4369.07 36263.25 00:21:37.494 0 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1135085 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1135085 ']' 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1135085 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.494 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135085 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135085' 00:21:37.755 killing process with pid 1135085 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1135085 00:21:37.755 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.755 00:21:37.755 Latency(us) 00:21:37.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.755 =================================================================================================================== 00:21:37.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1135085 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1134732 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1134732 ']' 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1134732 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1134732 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1134732' 00:21:37.755 killing process with pid 1134732 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1134732 00:21:37.755 [2024-07-16 00:32:51.328882] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:37.755 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1134732 00:21:38.015 00:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.016 
00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1135662 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1135662 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1135662 ']' 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.016 00:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.016 [2024-07-16 00:32:51.497178] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:38.016 [2024-07-16 00:32:51.497223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.016 [2024-07-16 00:32:51.557672] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.016 [2024-07-16 00:32:51.621248] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.016 [2024-07-16 00:32:51.621286] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.016 [2024-07-16 00:32:51.621294] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.016 [2024-07-16 00:32:51.621301] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.016 [2024-07-16 00:32:51.621306] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:38.016 [2024-07-16 00:32:51.621324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.957 [2024-07-16 00:32:52.319954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.957 malloc0 00:21:38.957 [2024-07-16 00:32:52.346710] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.957 [2024-07-16 00:32:52.346911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1135797 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1135797 /var/tmp/bdevperf.sock 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1135797 ']' 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.957 00:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.957 [2024-07-16 00:32:52.397996] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:21:38.957 [2024-07-16 00:32:52.398034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135797 ] 00:21:38.957 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.957 [2024-07-16 00:32:52.471921] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.957 [2024-07-16 00:32:52.525592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.995 00:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.995 00:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.995 00:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kOCGRAvDPS 00:21:39.995 00:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:39.995 [2024-07-16 00:32:53.475889] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.995 nvme0n1 00:21:39.995 00:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.276 Running I/O for 1 seconds... 00:21:41.217 00:21:41.217 Latency(us) 00:21:41.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.217 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:41.217 Verification LBA range: start 0x0 length 0x2000 00:21:41.217 nvme0n1 : 1.06 2767.65 10.81 0.00 0.00 45067.24 6034.77 56797.87 00:21:41.217 =================================================================================================================== 00:21:41.217 Total : 2767.65 10.81 0.00 0.00 45067.24 6034.77 56797.87 00:21:41.217 0 00:21:41.217 00:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:41.217 00:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.217 00:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.478 00:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.478 00:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:41.478 "subsystems": [ 00:21:41.478 { 00:21:41.478 "subsystem": "keyring", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "keyring_file_add_key", 00:21:41.479 "params": { 00:21:41.479 "name": "key0", 00:21:41.479 "path": "/tmp/tmp.kOCGRAvDPS" 00:21:41.479 } 00:21:41.479 } 00:21:41.479 ] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "iobuf", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "iobuf_set_options", 00:21:41.479 "params": { 00:21:41.479 "small_pool_count": 8192, 00:21:41.479 "large_pool_count": 1024, 00:21:41.479 "small_bufsize": 8192, 00:21:41.479 "large_bufsize": 135168 00:21:41.479 } 00:21:41.479 } 00:21:41.479 ] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "sock", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "sock_set_default_impl", 00:21:41.479 "params": { 00:21:41.479 "impl_name": "posix" 00:21:41.479 } 
00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "sock_impl_set_options", 00:21:41.479 "params": { 00:21:41.479 "impl_name": "ssl", 00:21:41.479 "recv_buf_size": 4096, 00:21:41.479 "send_buf_size": 4096, 00:21:41.479 "enable_recv_pipe": true, 00:21:41.479 "enable_quickack": false, 00:21:41.479 "enable_placement_id": 0, 00:21:41.479 "enable_zerocopy_send_server": true, 00:21:41.479 "enable_zerocopy_send_client": false, 00:21:41.479 "zerocopy_threshold": 0, 00:21:41.479 "tls_version": 0, 00:21:41.479 "enable_ktls": false 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "sock_impl_set_options", 00:21:41.479 "params": { 00:21:41.479 "impl_name": "posix", 00:21:41.479 "recv_buf_size": 2097152, 00:21:41.479 "send_buf_size": 2097152, 00:21:41.479 "enable_recv_pipe": true, 00:21:41.479 "enable_quickack": false, 00:21:41.479 "enable_placement_id": 0, 00:21:41.479 "enable_zerocopy_send_server": true, 00:21:41.479 "enable_zerocopy_send_client": false, 00:21:41.479 "zerocopy_threshold": 0, 00:21:41.479 "tls_version": 0, 00:21:41.479 "enable_ktls": false 00:21:41.479 } 00:21:41.479 } 00:21:41.479 ] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "vmd", 00:21:41.479 "config": [] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "accel", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "accel_set_options", 00:21:41.479 "params": { 00:21:41.479 "small_cache_size": 128, 00:21:41.479 "large_cache_size": 16, 00:21:41.479 "task_count": 2048, 00:21:41.479 "sequence_count": 2048, 00:21:41.479 "buf_count": 2048 00:21:41.479 } 00:21:41.479 } 00:21:41.479 ] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "bdev", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "bdev_set_options", 00:21:41.479 "params": { 00:21:41.479 "bdev_io_pool_size": 65535, 00:21:41.479 "bdev_io_cache_size": 256, 00:21:41.479 "bdev_auto_examine": true, 00:21:41.479 "iobuf_small_cache_size": 128, 00:21:41.479 "iobuf_large_cache_size": 16 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_raid_set_options", 00:21:41.479 "params": { 00:21:41.479 "process_window_size_kb": 1024 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_iscsi_set_options", 00:21:41.479 "params": { 00:21:41.479 "timeout_sec": 30 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_nvme_set_options", 00:21:41.479 "params": { 00:21:41.479 "action_on_timeout": "none", 00:21:41.479 "timeout_us": 0, 00:21:41.479 "timeout_admin_us": 0, 00:21:41.479 "keep_alive_timeout_ms": 10000, 00:21:41.479 "arbitration_burst": 0, 00:21:41.479 "low_priority_weight": 0, 00:21:41.479 "medium_priority_weight": 0, 00:21:41.479 "high_priority_weight": 0, 00:21:41.479 "nvme_adminq_poll_period_us": 10000, 00:21:41.479 "nvme_ioq_poll_period_us": 0, 00:21:41.479 "io_queue_requests": 0, 00:21:41.479 "delay_cmd_submit": true, 00:21:41.479 "transport_retry_count": 4, 00:21:41.479 "bdev_retry_count": 3, 00:21:41.479 "transport_ack_timeout": 0, 00:21:41.479 "ctrlr_loss_timeout_sec": 0, 00:21:41.479 "reconnect_delay_sec": 0, 00:21:41.479 "fast_io_fail_timeout_sec": 0, 00:21:41.479 "disable_auto_failback": false, 00:21:41.479 "generate_uuids": false, 00:21:41.479 "transport_tos": 0, 00:21:41.479 "nvme_error_stat": false, 00:21:41.479 "rdma_srq_size": 0, 00:21:41.479 "io_path_stat": false, 00:21:41.479 "allow_accel_sequence": false, 00:21:41.479 "rdma_max_cq_size": 0, 00:21:41.479 "rdma_cm_event_timeout_ms": 0, 00:21:41.479 "dhchap_digests": [ 00:21:41.479 "sha256", 
00:21:41.479 "sha384", 00:21:41.479 "sha512" 00:21:41.479 ], 00:21:41.479 "dhchap_dhgroups": [ 00:21:41.479 "null", 00:21:41.479 "ffdhe2048", 00:21:41.479 "ffdhe3072", 00:21:41.479 "ffdhe4096", 00:21:41.479 "ffdhe6144", 00:21:41.479 "ffdhe8192" 00:21:41.479 ] 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_nvme_set_hotplug", 00:21:41.479 "params": { 00:21:41.479 "period_us": 100000, 00:21:41.479 "enable": false 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_malloc_create", 00:21:41.479 "params": { 00:21:41.479 "name": "malloc0", 00:21:41.479 "num_blocks": 8192, 00:21:41.479 "block_size": 4096, 00:21:41.479 "physical_block_size": 4096, 00:21:41.479 "uuid": "e51ace55-5d68-4524-8a8d-b8c129e93513", 00:21:41.479 "optimal_io_boundary": 0 00:21:41.479 } 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "method": "bdev_wait_for_examine" 00:21:41.479 } 00:21:41.479 ] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "nbd", 00:21:41.479 "config": [] 00:21:41.479 }, 00:21:41.479 { 00:21:41.479 "subsystem": "scheduler", 00:21:41.479 "config": [ 00:21:41.479 { 00:21:41.479 "method": "framework_set_scheduler", 00:21:41.480 "params": { 00:21:41.480 "name": "static" 00:21:41.480 } 00:21:41.480 } 00:21:41.480 ] 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "subsystem": "nvmf", 00:21:41.480 "config": [ 00:21:41.480 { 00:21:41.480 "method": "nvmf_set_config", 00:21:41.480 "params": { 00:21:41.480 "discovery_filter": "match_any", 00:21:41.480 "admin_cmd_passthru": { 00:21:41.480 "identify_ctrlr": false 00:21:41.480 } 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_set_max_subsystems", 00:21:41.480 "params": { 00:21:41.480 "max_subsystems": 1024 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_set_crdt", 00:21:41.480 "params": { 00:21:41.480 "crdt1": 0, 00:21:41.480 "crdt2": 0, 00:21:41.480 "crdt3": 0 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_create_transport", 00:21:41.480 "params": { 00:21:41.480 "trtype": "TCP", 00:21:41.480 "max_queue_depth": 128, 00:21:41.480 "max_io_qpairs_per_ctrlr": 127, 00:21:41.480 "in_capsule_data_size": 4096, 00:21:41.480 "max_io_size": 131072, 00:21:41.480 "io_unit_size": 131072, 00:21:41.480 "max_aq_depth": 128, 00:21:41.480 "num_shared_buffers": 511, 00:21:41.480 "buf_cache_size": 4294967295, 00:21:41.480 "dif_insert_or_strip": false, 00:21:41.480 "zcopy": false, 00:21:41.480 "c2h_success": false, 00:21:41.480 "sock_priority": 0, 00:21:41.480 "abort_timeout_sec": 1, 00:21:41.480 "ack_timeout": 0, 00:21:41.480 "data_wr_pool_size": 0 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_create_subsystem", 00:21:41.480 "params": { 00:21:41.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.480 "allow_any_host": false, 00:21:41.480 "serial_number": "00000000000000000000", 00:21:41.480 "model_number": "SPDK bdev Controller", 00:21:41.480 "max_namespaces": 32, 00:21:41.480 "min_cntlid": 1, 00:21:41.480 "max_cntlid": 65519, 00:21:41.480 "ana_reporting": false 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_subsystem_add_host", 00:21:41.480 "params": { 00:21:41.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.480 "host": "nqn.2016-06.io.spdk:host1", 00:21:41.480 "psk": "key0" 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_subsystem_add_ns", 00:21:41.480 "params": { 00:21:41.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.480 "namespace": { 00:21:41.480 "nsid": 1, 
00:21:41.480 "bdev_name": "malloc0", 00:21:41.480 "nguid": "E51ACE555D6845248A8DB8C129E93513", 00:21:41.480 "uuid": "e51ace55-5d68-4524-8a8d-b8c129e93513", 00:21:41.480 "no_auto_visible": false 00:21:41.480 } 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "nvmf_subsystem_add_listener", 00:21:41.480 "params": { 00:21:41.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.480 "listen_address": { 00:21:41.480 "trtype": "TCP", 00:21:41.480 "adrfam": "IPv4", 00:21:41.480 "traddr": "10.0.0.2", 00:21:41.480 "trsvcid": "4420" 00:21:41.480 }, 00:21:41.480 "secure_channel": false, 00:21:41.480 "sock_impl": "ssl" 00:21:41.480 } 00:21:41.480 } 00:21:41.480 ] 00:21:41.480 } 00:21:41.480 ] 00:21:41.480 }' 00:21:41.480 00:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:41.480 00:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:41.480 "subsystems": [ 00:21:41.480 { 00:21:41.480 "subsystem": "keyring", 00:21:41.480 "config": [ 00:21:41.480 { 00:21:41.480 "method": "keyring_file_add_key", 00:21:41.480 "params": { 00:21:41.480 "name": "key0", 00:21:41.480 "path": "/tmp/tmp.kOCGRAvDPS" 00:21:41.480 } 00:21:41.480 } 00:21:41.480 ] 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "subsystem": "iobuf", 00:21:41.480 "config": [ 00:21:41.480 { 00:21:41.480 "method": "iobuf_set_options", 00:21:41.480 "params": { 00:21:41.480 "small_pool_count": 8192, 00:21:41.480 "large_pool_count": 1024, 00:21:41.480 "small_bufsize": 8192, 00:21:41.480 "large_bufsize": 135168 00:21:41.480 } 00:21:41.480 } 00:21:41.480 ] 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "subsystem": "sock", 00:21:41.480 "config": [ 00:21:41.480 { 00:21:41.480 "method": "sock_set_default_impl", 00:21:41.480 "params": { 00:21:41.480 "impl_name": "posix" 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "sock_impl_set_options", 00:21:41.480 "params": { 00:21:41.480 "impl_name": "ssl", 00:21:41.480 "recv_buf_size": 4096, 00:21:41.480 "send_buf_size": 4096, 00:21:41.480 "enable_recv_pipe": true, 00:21:41.480 "enable_quickack": false, 00:21:41.480 "enable_placement_id": 0, 00:21:41.480 "enable_zerocopy_send_server": true, 00:21:41.480 "enable_zerocopy_send_client": false, 00:21:41.480 "zerocopy_threshold": 0, 00:21:41.480 "tls_version": 0, 00:21:41.480 "enable_ktls": false 00:21:41.480 } 00:21:41.480 }, 00:21:41.480 { 00:21:41.480 "method": "sock_impl_set_options", 00:21:41.480 "params": { 00:21:41.480 "impl_name": "posix", 00:21:41.480 "recv_buf_size": 2097152, 00:21:41.480 "send_buf_size": 2097152, 00:21:41.480 "enable_recv_pipe": true, 00:21:41.480 "enable_quickack": false, 00:21:41.480 "enable_placement_id": 0, 00:21:41.480 "enable_zerocopy_send_server": true, 00:21:41.480 "enable_zerocopy_send_client": false, 00:21:41.480 "zerocopy_threshold": 0, 00:21:41.480 "tls_version": 0, 00:21:41.481 "enable_ktls": false 00:21:41.481 } 00:21:41.481 } 00:21:41.481 ] 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "subsystem": "vmd", 00:21:41.481 "config": [] 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "subsystem": "accel", 00:21:41.481 "config": [ 00:21:41.481 { 00:21:41.481 "method": "accel_set_options", 00:21:41.481 "params": { 00:21:41.481 "small_cache_size": 128, 00:21:41.481 "large_cache_size": 16, 00:21:41.481 "task_count": 2048, 00:21:41.481 "sequence_count": 2048, 00:21:41.481 "buf_count": 2048 00:21:41.481 } 00:21:41.481 } 00:21:41.481 ] 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "subsystem": "bdev", 
00:21:41.481 "config": [ 00:21:41.481 { 00:21:41.481 "method": "bdev_set_options", 00:21:41.481 "params": { 00:21:41.481 "bdev_io_pool_size": 65535, 00:21:41.481 "bdev_io_cache_size": 256, 00:21:41.481 "bdev_auto_examine": true, 00:21:41.481 "iobuf_small_cache_size": 128, 00:21:41.481 "iobuf_large_cache_size": 16 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_raid_set_options", 00:21:41.481 "params": { 00:21:41.481 "process_window_size_kb": 1024 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_iscsi_set_options", 00:21:41.481 "params": { 00:21:41.481 "timeout_sec": 30 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_nvme_set_options", 00:21:41.481 "params": { 00:21:41.481 "action_on_timeout": "none", 00:21:41.481 "timeout_us": 0, 00:21:41.481 "timeout_admin_us": 0, 00:21:41.481 "keep_alive_timeout_ms": 10000, 00:21:41.481 "arbitration_burst": 0, 00:21:41.481 "low_priority_weight": 0, 00:21:41.481 "medium_priority_weight": 0, 00:21:41.481 "high_priority_weight": 0, 00:21:41.481 "nvme_adminq_poll_period_us": 10000, 00:21:41.481 "nvme_ioq_poll_period_us": 0, 00:21:41.481 "io_queue_requests": 512, 00:21:41.481 "delay_cmd_submit": true, 00:21:41.481 "transport_retry_count": 4, 00:21:41.481 "bdev_retry_count": 3, 00:21:41.481 "transport_ack_timeout": 0, 00:21:41.481 "ctrlr_loss_timeout_sec": 0, 00:21:41.481 "reconnect_delay_sec": 0, 00:21:41.481 "fast_io_fail_timeout_sec": 0, 00:21:41.481 "disable_auto_failback": false, 00:21:41.481 "generate_uuids": false, 00:21:41.481 "transport_tos": 0, 00:21:41.481 "nvme_error_stat": false, 00:21:41.481 "rdma_srq_size": 0, 00:21:41.481 "io_path_stat": false, 00:21:41.481 "allow_accel_sequence": false, 00:21:41.481 "rdma_max_cq_size": 0, 00:21:41.481 "rdma_cm_event_timeout_ms": 0, 00:21:41.481 "dhchap_digests": [ 00:21:41.481 "sha256", 00:21:41.481 "sha384", 00:21:41.481 "sha512" 00:21:41.481 ], 00:21:41.481 "dhchap_dhgroups": [ 00:21:41.481 "null", 00:21:41.481 "ffdhe2048", 00:21:41.481 "ffdhe3072", 00:21:41.481 "ffdhe4096", 00:21:41.481 "ffdhe6144", 00:21:41.481 "ffdhe8192" 00:21:41.481 ] 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_nvme_attach_controller", 00:21:41.481 "params": { 00:21:41.481 "name": "nvme0", 00:21:41.481 "trtype": "TCP", 00:21:41.481 "adrfam": "IPv4", 00:21:41.481 "traddr": "10.0.0.2", 00:21:41.481 "trsvcid": "4420", 00:21:41.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.481 "prchk_reftag": false, 00:21:41.481 "prchk_guard": false, 00:21:41.481 "ctrlr_loss_timeout_sec": 0, 00:21:41.481 "reconnect_delay_sec": 0, 00:21:41.481 "fast_io_fail_timeout_sec": 0, 00:21:41.481 "psk": "key0", 00:21:41.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.481 "hdgst": false, 00:21:41.481 "ddgst": false 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_nvme_set_hotplug", 00:21:41.481 "params": { 00:21:41.481 "period_us": 100000, 00:21:41.481 "enable": false 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_enable_histogram", 00:21:41.481 "params": { 00:21:41.481 "name": "nvme0n1", 00:21:41.481 "enable": true 00:21:41.481 } 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "method": "bdev_wait_for_examine" 00:21:41.481 } 00:21:41.481 ] 00:21:41.481 }, 00:21:41.481 { 00:21:41.481 "subsystem": "nbd", 00:21:41.481 "config": [] 00:21:41.481 } 00:21:41.481 ] 00:21:41.481 }' 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1135797 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 1135797 ']' 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1135797 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.481 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135797 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135797' 00:21:41.742 killing process with pid 1135797 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1135797 00:21:41.742 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.742 00:21:41.742 Latency(us) 00:21:41.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.742 =================================================================================================================== 00:21:41.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1135797 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1135662 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1135662 ']' 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1135662 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135662 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135662' 00:21:41.742 killing process with pid 1135662 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1135662 00:21:41.742 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1135662 00:21:42.004 00:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:42.004 00:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.004 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.004 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.004 00:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:42.004 "subsystems": [ 00:21:42.004 { 00:21:42.004 "subsystem": "keyring", 00:21:42.004 "config": [ 00:21:42.004 { 00:21:42.004 "method": "keyring_file_add_key", 00:21:42.004 "params": { 00:21:42.004 "name": "key0", 00:21:42.004 "path": "/tmp/tmp.kOCGRAvDPS" 00:21:42.004 } 00:21:42.004 } 00:21:42.004 ] 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "subsystem": "iobuf", 00:21:42.004 "config": [ 00:21:42.004 { 00:21:42.004 "method": "iobuf_set_options", 00:21:42.004 "params": { 00:21:42.004 "small_pool_count": 8192, 00:21:42.004 "large_pool_count": 1024, 00:21:42.004 "small_bufsize": 8192, 00:21:42.004 
"large_bufsize": 135168 00:21:42.004 } 00:21:42.004 } 00:21:42.004 ] 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "subsystem": "sock", 00:21:42.004 "config": [ 00:21:42.004 { 00:21:42.004 "method": "sock_set_default_impl", 00:21:42.004 "params": { 00:21:42.004 "impl_name": "posix" 00:21:42.004 } 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "method": "sock_impl_set_options", 00:21:42.004 "params": { 00:21:42.004 "impl_name": "ssl", 00:21:42.004 "recv_buf_size": 4096, 00:21:42.004 "send_buf_size": 4096, 00:21:42.004 "enable_recv_pipe": true, 00:21:42.004 "enable_quickack": false, 00:21:42.004 "enable_placement_id": 0, 00:21:42.004 "enable_zerocopy_send_server": true, 00:21:42.004 "enable_zerocopy_send_client": false, 00:21:42.004 "zerocopy_threshold": 0, 00:21:42.004 "tls_version": 0, 00:21:42.004 "enable_ktls": false 00:21:42.004 } 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "method": "sock_impl_set_options", 00:21:42.004 "params": { 00:21:42.004 "impl_name": "posix", 00:21:42.004 "recv_buf_size": 2097152, 00:21:42.004 "send_buf_size": 2097152, 00:21:42.004 "enable_recv_pipe": true, 00:21:42.004 "enable_quickack": false, 00:21:42.004 "enable_placement_id": 0, 00:21:42.004 "enable_zerocopy_send_server": true, 00:21:42.004 "enable_zerocopy_send_client": false, 00:21:42.004 "zerocopy_threshold": 0, 00:21:42.004 "tls_version": 0, 00:21:42.004 "enable_ktls": false 00:21:42.004 } 00:21:42.004 } 00:21:42.004 ] 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "subsystem": "vmd", 00:21:42.004 "config": [] 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "subsystem": "accel", 00:21:42.004 "config": [ 00:21:42.004 { 00:21:42.004 "method": "accel_set_options", 00:21:42.004 "params": { 00:21:42.004 "small_cache_size": 128, 00:21:42.004 "large_cache_size": 16, 00:21:42.004 "task_count": 2048, 00:21:42.004 "sequence_count": 2048, 00:21:42.004 "buf_count": 2048 00:21:42.004 } 00:21:42.004 } 00:21:42.004 ] 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "subsystem": "bdev", 00:21:42.004 "config": [ 00:21:42.004 { 00:21:42.004 "method": "bdev_set_options", 00:21:42.004 "params": { 00:21:42.004 "bdev_io_pool_size": 65535, 00:21:42.004 "bdev_io_cache_size": 256, 00:21:42.004 "bdev_auto_examine": true, 00:21:42.004 "iobuf_small_cache_size": 128, 00:21:42.004 "iobuf_large_cache_size": 16 00:21:42.004 } 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "method": "bdev_raid_set_options", 00:21:42.004 "params": { 00:21:42.004 "process_window_size_kb": 1024 00:21:42.004 } 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "method": "bdev_iscsi_set_options", 00:21:42.004 "params": { 00:21:42.004 "timeout_sec": 30 00:21:42.004 } 00:21:42.004 }, 00:21:42.004 { 00:21:42.004 "method": "bdev_nvme_set_options", 00:21:42.004 "params": { 00:21:42.004 "action_on_timeout": "none", 00:21:42.004 "timeout_us": 0, 00:21:42.004 "timeout_admin_us": 0, 00:21:42.004 "keep_alive_timeout_ms": 10000, 00:21:42.004 "arbitration_burst": 0, 00:21:42.004 "low_priority_weight": 0, 00:21:42.004 "medium_priority_weight": 0, 00:21:42.004 "high_priority_weight": 0, 00:21:42.004 "nvme_adminq_poll_period_us": 10000, 00:21:42.005 "nvme_ioq_poll_period_us": 0, 00:21:42.005 "io_queue_requests": 0, 00:21:42.005 "delay_cmd_submit": true, 00:21:42.005 "transport_retry_count": 4, 00:21:42.005 "bdev_retry_count": 3, 00:21:42.005 "transport_ack_timeout": 0, 00:21:42.005 "ctrlr_loss_timeout_sec": 0, 00:21:42.005 "reconnect_delay_sec": 0, 00:21:42.005 "fast_io_fail_timeout_sec": 0, 00:21:42.005 "disable_auto_failback": false, 00:21:42.005 "generate_uuids": false, 00:21:42.005 
"transport_tos": 0, 00:21:42.005 "nvme_error_stat": false, 00:21:42.005 "rdma_srq_size": 0, 00:21:42.005 "io_path_stat": false, 00:21:42.005 "allow_accel_sequence": false, 00:21:42.005 "rdma_max_cq_size": 0, 00:21:42.005 "rdma_cm_event_timeout_ms": 0, 00:21:42.005 "dhchap_digests": [ 00:21:42.005 "sha256", 00:21:42.005 "sha384", 00:21:42.005 "sha512" 00:21:42.005 ], 00:21:42.005 "dhchap_dhgroups": [ 00:21:42.005 "null", 00:21:42.005 "ffdhe2048", 00:21:42.005 "ffdhe3072", 00:21:42.005 "ffdhe4096", 00:21:42.005 "ffdhe6144", 00:21:42.005 "ffdhe8192" 00:21:42.005 ] 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "bdev_nvme_set_hotplug", 00:21:42.005 "params": { 00:21:42.005 "period_us": 100000, 00:21:42.005 "enable": false 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "bdev_malloc_create", 00:21:42.005 "params": { 00:21:42.005 "name": "malloc0", 00:21:42.005 "num_blocks": 8192, 00:21:42.005 "block_size": 4096, 00:21:42.005 "physical_block_size": 4096, 00:21:42.005 "uuid": "e51ace55-5d68-4524-8a8d-b8c129e93513", 00:21:42.005 "optimal_io_boundary": 0 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "bdev_wait_for_examine" 00:21:42.005 } 00:21:42.005 ] 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "subsystem": "nbd", 00:21:42.005 "config": [] 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "subsystem": "scheduler", 00:21:42.005 "config": [ 00:21:42.005 { 00:21:42.005 "method": "framework_set_scheduler", 00:21:42.005 "params": { 00:21:42.005 "name": "static" 00:21:42.005 } 00:21:42.005 } 00:21:42.005 ] 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "subsystem": "nvmf", 00:21:42.005 "config": [ 00:21:42.005 { 00:21:42.005 "method": "nvmf_set_config", 00:21:42.005 "params": { 00:21:42.005 "discovery_filter": "match_any", 00:21:42.005 "admin_cmd_passthru": { 00:21:42.005 "identify_ctrlr": false 00:21:42.005 } 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_set_max_subsystems", 00:21:42.005 "params": { 00:21:42.005 "max_subsystems": 1024 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_set_crdt", 00:21:42.005 "params": { 00:21:42.005 "crdt1": 0, 00:21:42.005 "crdt2": 0, 00:21:42.005 "crdt3": 0 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_create_transport", 00:21:42.005 "params": { 00:21:42.005 "trtype": "TCP", 00:21:42.005 "max_queue_depth": 128, 00:21:42.005 "max_io_qpairs_per_ctrlr": 127, 00:21:42.005 "in_capsule_data_size": 4096, 00:21:42.005 "max_io_size": 131072, 00:21:42.005 "io_unit_size": 131072, 00:21:42.005 "max_aq_depth": 128, 00:21:42.005 "num_shared_buffers": 511, 00:21:42.005 "buf_cache_size": 4294967295, 00:21:42.005 "dif_insert_or_strip": false, 00:21:42.005 "zcopy": false, 00:21:42.005 "c2h_success": false, 00:21:42.005 "sock_priority": 0, 00:21:42.005 "abort_timeout_sec": 1, 00:21:42.005 "ack_timeout": 0, 00:21:42.005 "data_wr_pool_size": 0 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_create_subsystem", 00:21:42.005 "params": { 00:21:42.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.005 "allow_any_host": false, 00:21:42.005 "serial_number": "00000000000000000000", 00:21:42.005 "model_number": "SPDK bdev Controller", 00:21:42.005 "max_namespaces": 32, 00:21:42.005 "min_cntlid": 1, 00:21:42.005 "max_cntlid": 65519, 00:21:42.005 "ana_reporting": false 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_subsystem_add_host", 00:21:42.005 "params": { 00:21:42.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:42.005 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.005 "psk": "key0" 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_subsystem_add_ns", 00:21:42.005 "params": { 00:21:42.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.005 "namespace": { 00:21:42.005 "nsid": 1, 00:21:42.005 "bdev_name": "malloc0", 00:21:42.005 "nguid": "E51ACE555D6845248A8DB8C129E93513", 00:21:42.005 "uuid": "e51ace55-5d68-4524-8a8d-b8c129e93513", 00:21:42.005 "no_auto_visible": false 00:21:42.005 } 00:21:42.005 } 00:21:42.005 }, 00:21:42.005 { 00:21:42.005 "method": "nvmf_subsystem_add_listener", 00:21:42.005 "params": { 00:21:42.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.005 "listen_address": { 00:21:42.005 "trtype": "TCP", 00:21:42.005 "adrfam": "IPv4", 00:21:42.005 "traddr": "10.0.0.2", 00:21:42.005 "trsvcid": "4420" 00:21:42.005 }, 00:21:42.005 "secure_channel": false, 00:21:42.005 "sock_impl": "ssl" 00:21:42.005 } 00:21:42.005 } 00:21:42.005 ] 00:21:42.005 } 00:21:42.005 ] 00:21:42.005 }' 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1136482 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1136482 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1136482 ']' 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.005 00:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.005 [2024-07-16 00:32:55.513835] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:42.005 [2024-07-16 00:32:55.513892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.005 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.005 [2024-07-16 00:32:55.585409] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.266 [2024-07-16 00:32:55.650267] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.266 [2024-07-16 00:32:55.650304] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.266 [2024-07-16 00:32:55.650311] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.266 [2024-07-16 00:32:55.650318] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.266 [2024-07-16 00:32:55.650323] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
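The two JSON blobs above are not hand-written: the running target and the bdevperf process are each dumped with save_config, and the target dump is then fed straight back into a fresh nvmf_tgt through a /dev/fd process substitution (-c /dev/fd/62), so the subsystem, the TLS listener and the PSK host entry are restored without replaying individual RPCs. A minimal sketch of that capture/replay pattern, using an ordinary file (tgt_config.json is an illustrative name) in place of the fd trick:

  # capture the live configuration: keyring, bdevs, nvmf subsystems, listeners, ...
  ./scripts/rpc.py save_config > tgt_config.json

  # later, start a new target instance pre-loaded with exactly that state
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt_config.json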
00:21:42.266 [2024-07-16 00:32:55.650373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.266 [2024-07-16 00:32:55.847487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.266 [2024-07-16 00:32:55.879506] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.266 [2024-07-16 00:32:55.889556] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.838 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.838 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:42.838 00:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.838 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1136554 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1136554 /var/tmp/bdevperf.sock 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1136554 ']' 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.839 00:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:42.839 "subsystems": [ 00:21:42.839 { 00:21:42.839 "subsystem": "keyring", 00:21:42.839 "config": [ 00:21:42.839 { 00:21:42.839 "method": "keyring_file_add_key", 00:21:42.839 "params": { 00:21:42.839 "name": "key0", 00:21:42.839 "path": "/tmp/tmp.kOCGRAvDPS" 00:21:42.839 } 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "iobuf", 00:21:42.839 "config": [ 00:21:42.839 { 00:21:42.839 "method": "iobuf_set_options", 00:21:42.839 "params": { 00:21:42.839 "small_pool_count": 8192, 00:21:42.839 "large_pool_count": 1024, 00:21:42.839 "small_bufsize": 8192, 00:21:42.839 "large_bufsize": 135168 00:21:42.839 } 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "sock", 00:21:42.839 "config": [ 00:21:42.839 { 00:21:42.839 "method": "sock_set_default_impl", 00:21:42.839 "params": { 00:21:42.839 "impl_name": "posix" 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "sock_impl_set_options", 00:21:42.839 "params": { 00:21:42.839 "impl_name": "ssl", 00:21:42.839 "recv_buf_size": 4096, 00:21:42.839 "send_buf_size": 4096, 00:21:42.839 "enable_recv_pipe": true, 00:21:42.839 "enable_quickack": false, 00:21:42.839 "enable_placement_id": 0, 00:21:42.839 "enable_zerocopy_send_server": true, 00:21:42.839 "enable_zerocopy_send_client": false, 00:21:42.839 "zerocopy_threshold": 0, 00:21:42.839 "tls_version": 0, 00:21:42.839 "enable_ktls": false 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "sock_impl_set_options", 00:21:42.839 "params": { 00:21:42.839 "impl_name": "posix", 00:21:42.839 "recv_buf_size": 2097152, 00:21:42.839 "send_buf_size": 2097152, 00:21:42.839 "enable_recv_pipe": true, 00:21:42.839 "enable_quickack": false, 00:21:42.839 "enable_placement_id": 0, 00:21:42.839 "enable_zerocopy_send_server": true, 00:21:42.839 "enable_zerocopy_send_client": false, 00:21:42.839 "zerocopy_threshold": 0, 00:21:42.839 "tls_version": 0, 00:21:42.839 "enable_ktls": false 00:21:42.839 } 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "vmd", 00:21:42.839 "config": [] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "accel", 00:21:42.839 "config": [ 00:21:42.839 { 00:21:42.839 "method": "accel_set_options", 00:21:42.839 "params": { 00:21:42.839 "small_cache_size": 128, 00:21:42.839 "large_cache_size": 16, 00:21:42.839 "task_count": 2048, 00:21:42.839 "sequence_count": 2048, 00:21:42.839 "buf_count": 2048 00:21:42.839 } 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "bdev", 00:21:42.839 "config": [ 00:21:42.839 { 00:21:42.839 "method": "bdev_set_options", 00:21:42.839 "params": { 00:21:42.839 "bdev_io_pool_size": 65535, 00:21:42.839 "bdev_io_cache_size": 256, 00:21:42.839 "bdev_auto_examine": true, 00:21:42.839 "iobuf_small_cache_size": 128, 00:21:42.839 "iobuf_large_cache_size": 16 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_raid_set_options", 00:21:42.839 "params": { 00:21:42.839 "process_window_size_kb": 1024 00:21:42.839 } 
00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_iscsi_set_options", 00:21:42.839 "params": { 00:21:42.839 "timeout_sec": 30 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_nvme_set_options", 00:21:42.839 "params": { 00:21:42.839 "action_on_timeout": "none", 00:21:42.839 "timeout_us": 0, 00:21:42.839 "timeout_admin_us": 0, 00:21:42.839 "keep_alive_timeout_ms": 10000, 00:21:42.839 "arbitration_burst": 0, 00:21:42.839 "low_priority_weight": 0, 00:21:42.839 "medium_priority_weight": 0, 00:21:42.839 "high_priority_weight": 0, 00:21:42.839 "nvme_adminq_poll_period_us": 10000, 00:21:42.839 "nvme_ioq_poll_period_us": 0, 00:21:42.839 "io_queue_requests": 512, 00:21:42.839 "delay_cmd_submit": true, 00:21:42.839 "transport_retry_count": 4, 00:21:42.839 "bdev_retry_count": 3, 00:21:42.839 "transport_ack_timeout": 0, 00:21:42.839 "ctrlr_loss_timeout_sec": 0, 00:21:42.839 "reconnect_delay_sec": 0, 00:21:42.839 "fast_io_fail_timeout_sec": 0, 00:21:42.839 "disable_auto_failback": false, 00:21:42.839 "generate_uuids": false, 00:21:42.839 "transport_tos": 0, 00:21:42.839 "nvme_error_stat": false, 00:21:42.839 "rdma_srq_size": 0, 00:21:42.839 "io_path_stat": false, 00:21:42.839 "allow_accel_sequence": false, 00:21:42.839 "rdma_max_cq_size": 0, 00:21:42.839 "rdma_cm_event_timeout_ms": 0, 00:21:42.839 "dhchap_digests": [ 00:21:42.839 "sha256", 00:21:42.839 "sha384", 00:21:42.839 "sha512" 00:21:42.839 ], 00:21:42.839 "dhchap_dhgroups": [ 00:21:42.839 "null", 00:21:42.839 "ffdhe2048", 00:21:42.839 "ffdhe3072", 00:21:42.839 "ffdhe4096", 00:21:42.839 "ffdhe6144", 00:21:42.839 "ffdhe8192" 00:21:42.839 ] 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_nvme_attach_controller", 00:21:42.839 "params": { 00:21:42.839 "name": "nvme0", 00:21:42.839 "trtype": "TCP", 00:21:42.839 "adrfam": "IPv4", 00:21:42.839 "traddr": "10.0.0.2", 00:21:42.839 "trsvcid": "4420", 00:21:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.839 "prchk_reftag": false, 00:21:42.839 "prchk_guard": false, 00:21:42.839 "ctrlr_loss_timeout_sec": 0, 00:21:42.839 "reconnect_delay_sec": 0, 00:21:42.839 "fast_io_fail_timeout_sec": 0, 00:21:42.839 "psk": "key0", 00:21:42.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.839 "hdgst": false, 00:21:42.839 "ddgst": false 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_nvme_set_hotplug", 00:21:42.839 "params": { 00:21:42.839 "period_us": 100000, 00:21:42.839 "enable": false 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_enable_histogram", 00:21:42.839 "params": { 00:21:42.839 "name": "nvme0n1", 00:21:42.839 "enable": true 00:21:42.839 } 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "method": "bdev_wait_for_examine" 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }, 00:21:42.839 { 00:21:42.839 "subsystem": "nbd", 00:21:42.839 "config": [] 00:21:42.839 } 00:21:42.839 ] 00:21:42.839 }' 00:21:42.839 [2024-07-16 00:32:56.358876] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
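The initiator side uses the same trick: the saved bdevperf configuration, which already carries the keyring_file_add_key entry and the bdev_nvme_attach_controller call with "psk": "key0", is piped into bdevperf via -c /dev/fd/63, so the only remaining steps are to confirm the controller exists and start I/O. A rough sketch, again with a plain file (bperf_config.json, an assumed name) standing in for the fd redirection:

  # start bdevperf pre-configured from the saved JSON
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c bperf_config.json &

  # verify the TLS-attached controller came up, then run the workload
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests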
00:21:42.839 [2024-07-16 00:32:56.358928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136554 ] 00:21:42.839 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.839 [2024-07-16 00:32:56.437552] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.101 [2024-07-16 00:32:56.491511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.101 [2024-07-16 00:32:56.625461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.673 00:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.933 Running I/O for 1 seconds... 00:21:44.875 00:21:44.875 Latency(us) 00:21:44.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.875 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:44.875 Verification LBA range: start 0x0 length 0x2000 00:21:44.875 nvme0n1 : 1.06 3253.00 12.71 0.00 0.00 38339.01 4532.91 63351.47 00:21:44.875 =================================================================================================================== 00:21:44.875 Total : 3253.00 12.71 0.00 0.00 38339.01 4532.91 63351.47 00:21:44.875 0 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:44.875 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.875 nvmf_trace.0 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1136554 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1136554 ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1136554 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1136554 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1136554' 00:21:45.135 killing process with pid 1136554 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1136554 00:21:45.135 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.135 00:21:45.135 Latency(us) 00:21:45.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.135 =================================================================================================================== 00:21:45.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1136554 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.135 rmmod nvme_tcp 00:21:45.135 rmmod nvme_fabrics 00:21:45.135 rmmod nvme_keyring 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1136482 ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1136482 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1136482 ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1136482 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.135 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1136482 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1136482' 00:21:45.396 killing process with pid 1136482 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1136482 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1136482 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.396 00:32:58 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.396 00:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.942 00:33:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.942 00:33:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.f05TXlepel /tmp/tmp.3ffqrNdOsH /tmp/tmp.kOCGRAvDPS 00:21:47.942 00:21:47.942 real 1m24.490s 00:21:47.942 user 2m9.255s 00:21:47.942 sys 0m27.835s 00:21:47.942 00:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.942 00:33:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.942 ************************************ 00:21:47.942 END TEST nvmf_tls 00:21:47.942 ************************************ 00:21:47.942 00:33:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.942 00:33:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:47.942 00:33:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.942 00:33:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.942 00:33:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.942 ************************************ 00:21:47.942 START TEST nvmf_fips 00:21:47.942 ************************************ 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:47.942 * Looking for test storage... 
00:21:47.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.942 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.943 00:33:01 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:47.943 Error setting digest 00:21:47.943 00E2D6F0577F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:47.943 00E2D6F0577F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.943 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.944 00:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.086 
00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:56.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:56.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:56.086 Found net devices under 0000:31:00.0: cvl_0_0 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:56.086 Found net devices under 0000:31:00.1: cvl_0_1 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.086 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:21:56.087 00:21:56.087 --- 10.0.0.2 ping statistics --- 00:21:56.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.087 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:21:56.087 00:21:56.087 --- 10.0.0.1 ping statistics --- 00:21:56.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.087 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.087 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.347 00:33:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:56.347 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.347 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.347 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1142339 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1142339 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1142339 ']' 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.348 00:33:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 [2024-07-16 00:33:09.771820] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:56.348 [2024-07-16 00:33:09.771872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.348 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.348 [2024-07-16 00:33:09.853992] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.348 [2024-07-16 00:33:09.939707] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.348 [2024-07-16 00:33:09.939764] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:56.348 [2024-07-16 00:33:09.939772] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.348 [2024-07-16 00:33:09.939779] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.348 [2024-07-16 00:33:09.939784] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.348 [2024-07-16 00:33:09.939808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.289 [2024-07-16 00:33:10.759669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.289 [2024-07-16 00:33:10.775668] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.289 [2024-07-16 00:33:10.775956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.289 [2024-07-16 00:33:10.805809] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:57.289 malloc0 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1142640 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1142640 /var/tmp/bdevperf.sock 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1142640 ']' 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.289 00:33:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:57.289 [2024-07-16 00:33:10.909075] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:57.289 [2024-07-16 00:33:10.909155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142640 ] 00:21:57.548 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.548 [2024-07-16 00:33:10.970988] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.548 [2024-07-16 00:33:11.036038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.117 00:33:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.117 00:33:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:58.117 00:33:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:58.377 [2024-07-16 00:33:11.816333] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.377 [2024-07-16 00:33:11.816391] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:58.377 TLSTESTn1 00:21:58.377 00:33:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.377 Running I/O for 10 seconds... 
00:22:10.596 00:22:10.596 Latency(us) 00:22:10.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.596 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.596 Verification LBA range: start 0x0 length 0x2000 00:22:10.596 TLSTESTn1 : 10.02 5842.05 22.82 0.00 0.00 21875.46 5543.25 51118.08 00:22:10.596 =================================================================================================================== 00:22:10.596 Total : 5842.05 22.82 0.00 0.00 21875.46 5543.25 51118.08 00:22:10.596 0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:10.596 nvmf_trace.0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1142640 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1142640 ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1142640 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142640 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142640' 00:22:10.596 killing process with pid 1142640 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1142640 00:22:10.596 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.596 00:22:10.596 Latency(us) 00:22:10.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.596 =================================================================================================================== 00:22:10.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.596 [2024-07-16 00:33:22.214891] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1142640 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.596 rmmod nvme_tcp 00:22:10.596 rmmod nvme_fabrics 00:22:10.596 rmmod nvme_keyring 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1142339 ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1142339 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1142339 ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1142339 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142339 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142339' 00:22:10.596 killing process with pid 1142339 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1142339 00:22:10.596 [2024-07-16 00:33:22.441392] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1142339 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.596 00:33:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.169 00:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.169 00:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.169 00:22:11.169 real 0m23.526s 00:22:11.169 user 0m23.868s 00:22:11.169 sys 0m10.354s 00:22:11.169 00:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.169 00:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:11.169 ************************************ 00:22:11.169 END TEST nvmf_fips 
00:22:11.169 ************************************ 00:22:11.169 00:33:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:11.169 00:33:24 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:11.169 00:33:24 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:11.169 00:33:24 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:11.169 00:33:24 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:11.169 00:33:24 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.169 00:33:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:19.308 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:19.308 00:33:32 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:19.308 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.308 00:33:32 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:19.309 Found net devices under 0000:31:00.0: cvl_0_0 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:19.309 Found net devices under 0000:31:00.1: cvl_0_1 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:19.309 00:33:32 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:19.309 00:33:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:19.309 00:33:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:19.309 00:33:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.309 ************************************ 00:22:19.309 START TEST nvmf_perf_adq 00:22:19.309 ************************************ 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:19.309 * Looking for test storage... 00:22:19.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.309 00:33:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:27.441 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:27.441 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:27.441 Found net devices under 0000:31:00.0: cvl_0_0 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:27.441 Found net devices under 0000:31:00.1: cvl_0_1 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:27.441 00:33:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:28.822 00:33:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:30.732 00:33:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:36.061 00:33:48 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:36.062 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:36.062 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:36.062 Found net devices under 0000:31:00.0: cvl_0_0 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:36.062 Found net devices under 0000:31:00.1: cvl_0_1 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.062 00:33:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.062 00:33:49 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:22:36.062 00:22:36.062 --- 10.0.0.2 ping statistics --- 00:22:36.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.062 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:22:36.062 00:22:36.062 --- 10.0.0.1 ping statistics --- 00:22:36.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.062 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1155448 00:22:36.062 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1155448 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1155448 ']' 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.063 00:33:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.063 [2024-07-16 00:33:49.297088] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
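For anyone reproducing this topology by hand, the plumbing that the two pings above just validated condenses to the sketch below. It is only a summary of commands already visible in this log; the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are specific to this rig and would differ elsewhere.

# Put the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Confirm reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside the namespace (via ip netns exec cvl_0_0_ns_spdk), while spdk_nvme_perf connects from the host side to 10.0.0.2:4420.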
00:22:36.063 [2024-07-16 00:33:49.297158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.063 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.063 [2024-07-16 00:33:49.376507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.063 [2024-07-16 00:33:49.452973] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.063 [2024-07-16 00:33:49.453010] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.063 [2024-07-16 00:33:49.453018] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.063 [2024-07-16 00:33:49.453024] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.063 [2024-07-16 00:33:49.453030] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.063 [2024-07-16 00:33:49.453252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.063 [2024-07-16 00:33:49.453271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.063 [2024-07-16 00:33:49.453355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.063 [2024-07-16 00:33:49.453356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 [2024-07-16 00:33:50.255262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.635 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.896 Malloc1 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.896 [2024-07-16 00:33:50.314650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1155747 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:36.896 00:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:36.896 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:38.808 
"tick_rate": 2400000000, 00:22:38.808 "poll_groups": [ 00:22:38.808 { 00:22:38.808 "name": "nvmf_tgt_poll_group_000", 00:22:38.808 "admin_qpairs": 1, 00:22:38.808 "io_qpairs": 1, 00:22:38.808 "current_admin_qpairs": 1, 00:22:38.808 "current_io_qpairs": 1, 00:22:38.808 "pending_bdev_io": 0, 00:22:38.808 "completed_nvme_io": 20752, 00:22:38.808 "transports": [ 00:22:38.808 { 00:22:38.808 "trtype": "TCP" 00:22:38.808 } 00:22:38.808 ] 00:22:38.808 }, 00:22:38.808 { 00:22:38.808 "name": "nvmf_tgt_poll_group_001", 00:22:38.808 "admin_qpairs": 0, 00:22:38.808 "io_qpairs": 1, 00:22:38.808 "current_admin_qpairs": 0, 00:22:38.808 "current_io_qpairs": 1, 00:22:38.808 "pending_bdev_io": 0, 00:22:38.808 "completed_nvme_io": 28270, 00:22:38.808 "transports": [ 00:22:38.808 { 00:22:38.808 "trtype": "TCP" 00:22:38.808 } 00:22:38.808 ] 00:22:38.808 }, 00:22:38.808 { 00:22:38.808 "name": "nvmf_tgt_poll_group_002", 00:22:38.808 "admin_qpairs": 0, 00:22:38.808 "io_qpairs": 1, 00:22:38.808 "current_admin_qpairs": 0, 00:22:38.808 "current_io_qpairs": 1, 00:22:38.808 "pending_bdev_io": 0, 00:22:38.808 "completed_nvme_io": 23795, 00:22:38.808 "transports": [ 00:22:38.808 { 00:22:38.808 "trtype": "TCP" 00:22:38.808 } 00:22:38.808 ] 00:22:38.808 }, 00:22:38.808 { 00:22:38.808 "name": "nvmf_tgt_poll_group_003", 00:22:38.808 "admin_qpairs": 0, 00:22:38.808 "io_qpairs": 1, 00:22:38.808 "current_admin_qpairs": 0, 00:22:38.808 "current_io_qpairs": 1, 00:22:38.808 "pending_bdev_io": 0, 00:22:38.808 "completed_nvme_io": 20602, 00:22:38.808 "transports": [ 00:22:38.808 { 00:22:38.808 "trtype": "TCP" 00:22:38.808 } 00:22:38.808 ] 00:22:38.808 } 00:22:38.808 ] 00:22:38.808 }' 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:38.808 00:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1155747 00:22:46.932 Initializing NVMe Controllers 00:22:46.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:46.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:46.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:46.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:46.932 Initialization complete. Launching workers. 
00:22:46.932 ======================================================== 00:22:46.932 Latency(us) 00:22:46.932 Device Information : IOPS MiB/s Average min max 00:22:46.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11514.10 44.98 5559.54 1301.35 8940.88 00:22:46.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14802.30 57.82 4323.04 1216.00 10205.48 00:22:46.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14372.90 56.14 4452.57 870.95 11272.97 00:22:46.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14437.20 56.40 4432.61 1074.12 8992.67 00:22:46.933 ======================================================== 00:22:46.933 Total : 55126.49 215.34 4643.77 870.95 11272.97 00:22:46.933 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.933 rmmod nvme_tcp 00:22:46.933 rmmod nvme_fabrics 00:22:46.933 rmmod nvme_keyring 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1155448 ']' 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1155448 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1155448 ']' 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1155448 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.933 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1155448 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1155448' 00:22:47.194 killing process with pid 1155448 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1155448 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1155448 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.194 00:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.736 00:34:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.736 00:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:49.736 00:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:51.115 00:34:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:53.027 00:34:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.322 00:34:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:58.322 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:58.322 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:58.322 Found net devices under 0000:31:00.0: cvl_0_0 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:58.322 Found net devices under 0000:31:00.1: cvl_0_1 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.322 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.323 
00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:22:58.323 00:22:58.323 --- 10.0.0.2 ping statistics --- 00:22:58.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.323 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:22:58.323 00:22:58.323 --- 10.0.0.1 ping statistics --- 00:22:58.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.323 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:58.323 net.core.busy_poll = 1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:58.323 net.core.busy_read = 1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1160226 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1160226 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1160226 ']' 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.323 00:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.584 [2024-07-16 00:34:11.967330] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:22:58.584 [2024-07-16 00:34:11.967404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.584 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.584 [2024-07-16 00:34:12.045763] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.584 [2024-07-16 00:34:12.120848] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.584 [2024-07-16 00:34:12.120890] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.584 [2024-07-16 00:34:12.120898] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.584 [2024-07-16 00:34:12.120905] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.584 [2024-07-16 00:34:12.120911] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
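The ADQ-specific host configuration that adq_configure_driver just applied, and which distinguishes this second pass from the first run above, condenses to the sketch below. All commands are taken from this log; the device, the namespace and the 10.0.0.2:4420 flow identify this particular rig, and the helper-script path is the one used by this workspace.

# Enable hardware TC offload on the E810 port and turn off the packet-inspect optimization flag.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Let application threads busy-poll their sockets.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Carve the port into two traffic classes of two queues each, in channel mode, ...
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# ... and steer the NVMe/TCP flow (dst 10.0.0.2, TCP port 4420) into TC 1 in hardware.
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Align transmit/receive queue affinity with the SPDK helper script.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

On the SPDK side this is paired, a few lines further down, with sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 (versus placement-id 0 and sock-priority 0 in the first pass). The later nvmf_get_stats output shows the effect: the four I/O qpairs land on two poll groups while the other two groups stay idle, instead of one qpair per poll group as in the first run.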
00:22:58.584 [2024-07-16 00:34:12.121061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.584 [2024-07-16 00:34:12.121198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.584 [2024-07-16 00:34:12.121356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.584 [2024-07-16 00:34:12.121472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.155 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 [2024-07-16 00:34:12.919606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 Malloc1 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.416 [2024-07-16 00:34:12.978976] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1160562 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:59.416 00:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:59.416 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.953 00:34:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:01.953 00:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.953 00:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:01.953 "tick_rate": 2400000000, 00:23:01.953 "poll_groups": [ 00:23:01.953 { 00:23:01.953 "name": "nvmf_tgt_poll_group_000", 00:23:01.953 "admin_qpairs": 1, 00:23:01.953 "io_qpairs": 2, 00:23:01.953 "current_admin_qpairs": 1, 00:23:01.953 "current_io_qpairs": 2, 00:23:01.953 "pending_bdev_io": 0, 00:23:01.953 "completed_nvme_io": 29116, 00:23:01.953 "transports": [ 00:23:01.953 { 00:23:01.953 "trtype": "TCP" 00:23:01.953 } 00:23:01.953 ] 00:23:01.953 }, 00:23:01.953 { 00:23:01.953 "name": "nvmf_tgt_poll_group_001", 00:23:01.953 "admin_qpairs": 0, 00:23:01.953 "io_qpairs": 2, 00:23:01.953 "current_admin_qpairs": 0, 00:23:01.953 "current_io_qpairs": 2, 00:23:01.953 "pending_bdev_io": 0, 00:23:01.953 "completed_nvme_io": 40630, 00:23:01.953 "transports": [ 00:23:01.953 { 00:23:01.953 "trtype": "TCP" 00:23:01.953 } 00:23:01.953 ] 00:23:01.953 }, 00:23:01.953 { 00:23:01.953 "name": "nvmf_tgt_poll_group_002", 00:23:01.953 "admin_qpairs": 0, 00:23:01.953 "io_qpairs": 0, 00:23:01.953 "current_admin_qpairs": 0, 00:23:01.953 "current_io_qpairs": 0, 00:23:01.953 "pending_bdev_io": 0, 00:23:01.953 "completed_nvme_io": 0, 
00:23:01.953 "transports": [ 00:23:01.953 { 00:23:01.953 "trtype": "TCP" 00:23:01.953 } 00:23:01.953 ] 00:23:01.953 }, 00:23:01.953 { 00:23:01.953 "name": "nvmf_tgt_poll_group_003", 00:23:01.953 "admin_qpairs": 0, 00:23:01.953 "io_qpairs": 0, 00:23:01.953 "current_admin_qpairs": 0, 00:23:01.953 "current_io_qpairs": 0, 00:23:01.953 "pending_bdev_io": 0, 00:23:01.953 "completed_nvme_io": 0, 00:23:01.953 "transports": [ 00:23:01.953 { 00:23:01.953 "trtype": "TCP" 00:23:01.953 } 00:23:01.953 ] 00:23:01.953 } 00:23:01.953 ] 00:23:01.953 }' 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:01.953 00:34:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1160562 00:23:10.074 Initializing NVMe Controllers 00:23:10.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:10.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:10.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:10.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:10.074 Initialization complete. Launching workers. 00:23:10.074 ======================================================== 00:23:10.074 Latency(us) 00:23:10.074 Device Information : IOPS MiB/s Average min max 00:23:10.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10272.70 40.13 6250.14 1140.92 52150.33 00:23:10.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10822.50 42.28 5914.17 1260.44 49302.60 00:23:10.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9235.40 36.08 6931.28 1228.06 50834.85 00:23:10.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9828.70 38.39 6511.96 1326.69 48953.43 00:23:10.074 ======================================================== 00:23:10.074 Total : 40159.30 156.87 6380.32 1140.92 52150.33 00:23:10.074 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.074 rmmod nvme_tcp 00:23:10.074 rmmod nvme_fabrics 00:23:10.074 rmmod nvme_keyring 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1160226 ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1160226 ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1160226' 00:23:10.074 killing process with pid 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1160226 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.074 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.075 00:34:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.372 00:34:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.372 00:34:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:13.372 00:23:13.372 real 0m53.874s 00:23:13.372 user 2m49.776s 00:23:13.372 sys 0m11.264s 00:23:13.372 00:34:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.372 00:34:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.372 ************************************ 00:23:13.372 END TEST nvmf_perf_adq 00:23:13.372 ************************************ 00:23:13.372 00:34:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:13.373 00:34:26 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:13.373 00:34:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.373 00:34:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.373 00:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.373 ************************************ 00:23:13.373 START TEST nvmf_shutdown 00:23:13.373 ************************************ 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:13.373 * Looking for test storage... 
00:23:13.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:13.373 ************************************ 00:23:13.373 START TEST nvmf_shutdown_tc1 00:23:13.373 ************************************ 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:13.373 00:34:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.373 00:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:21.523 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:21.523 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.523 00:34:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:21.523 Found net devices under 0000:31:00.0: cvl_0_0 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:21.523 Found net devices under 0000:31:00.1: cvl_0_1 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:21.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:23:21.523 00:23:21.523 --- 10.0.0.2 ping statistics --- 00:23:21.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.523 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:23:21.523 00:23:21.523 --- 10.0.0.1 ping statistics --- 00:23:21.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.523 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.523 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1167378 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1167378 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1167378 ']' 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.524 00:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.524 [2024-07-16 00:34:34.629139] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:21.524 [2024-07-16 00:34:34.629186] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.524 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.524 [2024-07-16 00:34:34.718665] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.524 [2024-07-16 00:34:34.775484] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.524 [2024-07-16 00:34:34.775530] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.524 [2024-07-16 00:34:34.775536] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.524 [2024-07-16 00:34:34.775541] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.524 [2024-07-16 00:34:34.775544] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.524 [2024-07-16 00:34:34.775653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.524 [2024-07-16 00:34:34.775813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.524 [2024-07-16 00:34:34.775971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.524 [2024-07-16 00:34:34.775973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:21.785 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.785 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:21.785 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.785 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.785 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.046 [2024-07-16 00:34:35.442506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.046 00:34:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.046 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.046 Malloc1 00:23:22.046 [2024-07-16 00:34:35.545009] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.046 Malloc2 00:23:22.046 Malloc3 00:23:22.046 Malloc4 00:23:22.046 Malloc5 00:23:22.307 Malloc6 00:23:22.307 Malloc7 00:23:22.307 Malloc8 00:23:22.307 Malloc9 00:23:22.307 Malloc10 00:23:22.307 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.307 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:22.307 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.307 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1167764 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1167764 
/var/tmp/bdevperf.sock 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1167764 ']' 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.569 "name": "Nvme$subsystem", 00:23:22.569 "trtype": "$TEST_TRANSPORT", 00:23:22.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.569 "adrfam": "ipv4", 00:23:22.569 "trsvcid": "$NVMF_PORT", 00:23:22.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.569 "hdgst": ${hdgst:-false}, 00:23:22.569 "ddgst": ${ddgst:-false} 00:23:22.569 }, 00:23:22.569 "method": "bdev_nvme_attach_controller" 00:23:22.569 } 00:23:22.569 EOF 00:23:22.569 )") 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.569 "name": "Nvme$subsystem", 00:23:22.569 "trtype": "$TEST_TRANSPORT", 00:23:22.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.569 "adrfam": "ipv4", 00:23:22.569 "trsvcid": "$NVMF_PORT", 00:23:22.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.569 "hdgst": ${hdgst:-false}, 00:23:22.569 "ddgst": ${ddgst:-false} 00:23:22.569 }, 00:23:22.569 "method": "bdev_nvme_attach_controller" 00:23:22.569 } 00:23:22.569 EOF 00:23:22.569 )") 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.569 
"name": "Nvme$subsystem", 00:23:22.569 "trtype": "$TEST_TRANSPORT", 00:23:22.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.569 "adrfam": "ipv4", 00:23:22.569 "trsvcid": "$NVMF_PORT", 00:23:22.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.569 "hdgst": ${hdgst:-false}, 00:23:22.569 "ddgst": ${ddgst:-false} 00:23:22.569 }, 00:23:22.569 "method": "bdev_nvme_attach_controller" 00:23:22.569 } 00:23:22.569 EOF 00:23:22.569 )") 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.569 "name": "Nvme$subsystem", 00:23:22.569 "trtype": "$TEST_TRANSPORT", 00:23:22.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.569 "adrfam": "ipv4", 00:23:22.569 "trsvcid": "$NVMF_PORT", 00:23:22.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.569 "hdgst": ${hdgst:-false}, 00:23:22.569 "ddgst": ${ddgst:-false} 00:23:22.569 }, 00:23:22.569 "method": "bdev_nvme_attach_controller" 00:23:22.569 } 00:23:22.569 EOF 00:23:22.569 )") 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.569 "name": "Nvme$subsystem", 00:23:22.569 "trtype": "$TEST_TRANSPORT", 00:23:22.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.569 "adrfam": "ipv4", 00:23:22.569 "trsvcid": "$NVMF_PORT", 00:23:22.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.569 "hdgst": ${hdgst:-false}, 00:23:22.569 "ddgst": ${ddgst:-false} 00:23:22.569 }, 00:23:22.569 "method": "bdev_nvme_attach_controller" 00:23:22.569 } 00:23:22.569 EOF 00:23:22.569 )") 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.569 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.569 { 00:23:22.569 "params": { 00:23:22.570 "name": "Nvme$subsystem", 00:23:22.570 "trtype": "$TEST_TRANSPORT", 00:23:22.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "$NVMF_PORT", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.570 "hdgst": ${hdgst:-false}, 00:23:22.570 "ddgst": ${ddgst:-false} 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 } 00:23:22.570 EOF 00:23:22.570 )") 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.570 { 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme$subsystem", 
00:23:22.570 "trtype": "$TEST_TRANSPORT", 00:23:22.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "$NVMF_PORT", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.570 "hdgst": ${hdgst:-false}, 00:23:22.570 "ddgst": ${ddgst:-false} 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 } 00:23:22.570 EOF 00:23:22.570 )") 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.570 [2024-07-16 00:34:35.994317] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:22.570 [2024-07-16 00:34:35.994393] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.570 { 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme$subsystem", 00:23:22.570 "trtype": "$TEST_TRANSPORT", 00:23:22.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "$NVMF_PORT", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.570 "hdgst": ${hdgst:-false}, 00:23:22.570 "ddgst": ${ddgst:-false} 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 } 00:23:22.570 EOF 00:23:22.570 )") 00:23:22.570 00:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.570 { 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme$subsystem", 00:23:22.570 "trtype": "$TEST_TRANSPORT", 00:23:22.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "$NVMF_PORT", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.570 "hdgst": ${hdgst:-false}, 00:23:22.570 "ddgst": ${ddgst:-false} 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 } 00:23:22.570 EOF 00:23:22.570 )") 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.570 { 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme$subsystem", 00:23:22.570 "trtype": "$TEST_TRANSPORT", 00:23:22.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "$NVMF_PORT", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.570 "hdgst": ${hdgst:-false}, 00:23:22.570 "ddgst": ${ddgst:-false} 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 } 00:23:22.570 EOF 00:23:22.570 )") 00:23:22.570 00:34:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:22.570 00:34:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme1", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme2", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme3", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme4", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme5", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme6", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme7", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme8", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 
00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme9", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 },{ 00:23:22.570 "params": { 00:23:22.570 "name": "Nvme10", 00:23:22.570 "trtype": "tcp", 00:23:22.570 "traddr": "10.0.0.2", 00:23:22.570 "adrfam": "ipv4", 00:23:22.570 "trsvcid": "4420", 00:23:22.570 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:22.570 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:22.570 "hdgst": false, 00:23:22.570 "ddgst": false 00:23:22.570 }, 00:23:22.570 "method": "bdev_nvme_attach_controller" 00:23:22.570 }' 00:23:22.570 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.570 [2024-07-16 00:34:36.063303] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.570 [2024-07-16 00:34:36.127797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1167764 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:23.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1167764 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:23.959 00:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1167378 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.901 { 00:23:24.901 "params": { 00:23:24.901 "name": "Nvme$subsystem", 00:23:24.901 "trtype": "$TEST_TRANSPORT", 00:23:24.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.901 "adrfam": "ipv4", 00:23:24.901 "trsvcid": "$NVMF_PORT", 00:23:24.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.901 "hdgst": ${hdgst:-false}, 00:23:24.901 "ddgst": ${ddgst:-false} 00:23:24.901 }, 00:23:24.901 "method": "bdev_nvme_attach_controller" 00:23:24.901 } 00:23:24.901 EOF 00:23:24.901 )") 00:23:24.901 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.163 { 00:23:25.163 "params": { 00:23:25.163 "name": "Nvme$subsystem", 00:23:25.163 "trtype": "$TEST_TRANSPORT", 00:23:25.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.163 "adrfam": "ipv4", 00:23:25.163 "trsvcid": "$NVMF_PORT", 00:23:25.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.163 "hdgst": ${hdgst:-false}, 00:23:25.163 "ddgst": ${ddgst:-false} 00:23:25.163 }, 00:23:25.163 "method": "bdev_nvme_attach_controller" 00:23:25.163 } 00:23:25.163 EOF 00:23:25.163 )") 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.163 { 00:23:25.163 "params": { 00:23:25.163 "name": "Nvme$subsystem", 00:23:25.163 "trtype": "$TEST_TRANSPORT", 00:23:25.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.163 "adrfam": "ipv4", 00:23:25.163 "trsvcid": "$NVMF_PORT", 00:23:25.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.163 "hdgst": ${hdgst:-false}, 00:23:25.163 "ddgst": ${ddgst:-false} 00:23:25.163 }, 00:23:25.163 "method": "bdev_nvme_attach_controller" 00:23:25.163 } 00:23:25.163 EOF 00:23:25.163 )") 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.163 { 00:23:25.163 "params": { 00:23:25.163 "name": "Nvme$subsystem", 00:23:25.163 "trtype": "$TEST_TRANSPORT", 00:23:25.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.163 "adrfam": "ipv4", 00:23:25.163 "trsvcid": "$NVMF_PORT", 00:23:25.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.163 "hdgst": ${hdgst:-false}, 00:23:25.163 "ddgst": ${ddgst:-false} 00:23:25.163 }, 00:23:25.163 "method": "bdev_nvme_attach_controller" 00:23:25.163 } 00:23:25.163 EOF 00:23:25.163 )") 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:25.163 { 00:23:25.163 "params": { 00:23:25.163 "name": "Nvme$subsystem", 00:23:25.163 "trtype": "$TEST_TRANSPORT", 00:23:25.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.163 "adrfam": "ipv4", 00:23:25.163 "trsvcid": "$NVMF_PORT", 00:23:25.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.163 "hdgst": ${hdgst:-false}, 00:23:25.163 "ddgst": ${ddgst:-false} 00:23:25.163 }, 00:23:25.163 "method": "bdev_nvme_attach_controller" 00:23:25.163 } 00:23:25.163 EOF 00:23:25.163 )") 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.163 { 00:23:25.163 "params": { 00:23:25.163 "name": "Nvme$subsystem", 00:23:25.163 "trtype": "$TEST_TRANSPORT", 00:23:25.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.163 "adrfam": "ipv4", 00:23:25.163 "trsvcid": "$NVMF_PORT", 00:23:25.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.163 "hdgst": ${hdgst:-false}, 00:23:25.163 "ddgst": ${ddgst:-false} 00:23:25.163 }, 00:23:25.163 "method": "bdev_nvme_attach_controller" 00:23:25.163 } 00:23:25.163 EOF 00:23:25.163 )") 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.163 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.164 [2024-07-16 00:34:38.573218] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:25.164 [2024-07-16 00:34:38.573277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168303 ] 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.164 { 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme$subsystem", 00:23:25.164 "trtype": "$TEST_TRANSPORT", 00:23:25.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "$NVMF_PORT", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.164 "hdgst": ${hdgst:-false}, 00:23:25.164 "ddgst": ${ddgst:-false} 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 } 00:23:25.164 EOF 00:23:25.164 )") 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.164 { 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme$subsystem", 00:23:25.164 "trtype": "$TEST_TRANSPORT", 00:23:25.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "$NVMF_PORT", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.164 "hdgst": ${hdgst:-false}, 00:23:25.164 "ddgst": ${ddgst:-false} 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 } 00:23:25.164 EOF 00:23:25.164 )") 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.164 { 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme$subsystem", 00:23:25.164 "trtype": "$TEST_TRANSPORT", 00:23:25.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "$NVMF_PORT", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.164 "hdgst": ${hdgst:-false}, 00:23:25.164 "ddgst": ${ddgst:-false} 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 } 00:23:25.164 EOF 00:23:25.164 )") 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.164 { 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme$subsystem", 00:23:25.164 "trtype": "$TEST_TRANSPORT", 00:23:25.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "$NVMF_PORT", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.164 "hdgst": ${hdgst:-false}, 00:23:25.164 "ddgst": ${ddgst:-false} 00:23:25.164 }, 00:23:25.164 "method": 
"bdev_nvme_attach_controller" 00:23:25.164 } 00:23:25.164 EOF 00:23:25.164 )") 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:25.164 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:25.164 00:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme1", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme2", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme3", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme4", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme5", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme6", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme7", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 
},{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme8", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme9", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 },{ 00:23:25.164 "params": { 00:23:25.164 "name": "Nvme10", 00:23:25.164 "trtype": "tcp", 00:23:25.164 "traddr": "10.0.0.2", 00:23:25.164 "adrfam": "ipv4", 00:23:25.164 "trsvcid": "4420", 00:23:25.164 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:25.164 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:25.164 "hdgst": false, 00:23:25.164 "ddgst": false 00:23:25.164 }, 00:23:25.164 "method": "bdev_nvme_attach_controller" 00:23:25.164 }' 00:23:25.164 [2024-07-16 00:34:38.639744] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.164 [2024-07-16 00:34:38.705129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.623 Running I/O for 1 seconds... 00:23:27.563 00:23:27.563 Latency(us) 00:23:27.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme1n1 : 1.06 241.74 15.11 0.00 0.00 261901.65 21626.88 246415.36 00:23:27.563 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme2n1 : 1.05 182.38 11.40 0.00 0.00 340785.78 21517.65 277872.64 00:23:27.563 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme3n1 : 1.18 271.54 16.97 0.00 0.00 225479.68 10977.28 258648.75 00:23:27.563 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme4n1 : 1.15 223.43 13.96 0.00 0.00 268919.04 19333.12 255153.49 00:23:27.563 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme5n1 : 1.12 232.71 14.54 0.00 0.00 251238.77 8847.36 244667.73 00:23:27.563 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme6n1 : 1.17 217.89 13.62 0.00 0.00 266351.57 18459.31 253405.87 00:23:27.563 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme7n1 : 1.19 269.28 16.83 0.00 0.00 211861.33 21845.33 249910.61 00:23:27.563 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme8n1 : 1.19 273.21 17.08 0.00 0.00 204652.76 3413.33 242920.11 00:23:27.563 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme9n1 : 1.20 265.80 16.61 0.00 0.00 207301.80 15510.19 290106.03 00:23:27.563 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.563 Verification LBA range: start 0x0 length 0x400 00:23:27.563 Nvme10n1 : 1.19 268.05 16.75 0.00 0.00 201405.44 14636.37 223696.21 00:23:27.563 =================================================================================================================== 00:23:27.563 Total : 2446.03 152.88 0.00 0.00 237915.01 3413.33 290106.03 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.822 rmmod nvme_tcp 00:23:27.822 rmmod nvme_fabrics 00:23:27.822 rmmod nvme_keyring 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1167378 ']' 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1167378 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1167378 ']' 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1167378 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1167378 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1167378' 00:23:27.822 killing 
process with pid 1167378 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1167378 00:23:27.822 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1167378 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.082 00:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.625 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.625 00:23:30.625 real 0m16.961s 00:23:30.625 user 0m33.682s 00:23:30.625 sys 0m6.868s 00:23:30.625 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.625 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.625 ************************************ 00:23:30.625 END TEST nvmf_shutdown_tc1 00:23:30.626 ************************************ 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:30.626 ************************************ 00:23:30.626 START TEST nvmf_shutdown_tc2 00:23:30.626 ************************************ 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.626 00:34:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:30.626 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:30.626 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.626 00:34:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:30.626 Found net devices under 0000:31:00.0: cvl_0_0 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:30.626 Found net devices under 0000:31:00.1: cvl_0_1 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.626 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.627 00:34:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:23:30.627 00:23:30.627 --- 10.0.0.2 ping statistics --- 00:23:30.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.627 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:23:30.627 00:23:30.627 --- 10.0.0.1 ping statistics --- 00:23:30.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.627 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1169568 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # 
waitforlisten 1169568 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1169568 ']' 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.627 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.627 [2024-07-16 00:34:44.224439] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:30.627 [2024-07-16 00:34:44.224503] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.887 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.887 [2024-07-16 00:34:44.317570] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.887 [2024-07-16 00:34:44.378487] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.887 [2024-07-16 00:34:44.378521] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.887 [2024-07-16 00:34:44.378527] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.887 [2024-07-16 00:34:44.378532] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.887 [2024-07-16 00:34:44.378536] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
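A note on the masks in the nvmf_tgt command line above: -e 0xFFFF is the tracepoint group mask (hence the app_setup_trace notices), while -m 0x1E is the reactor core mask. 0x1E is binary 11110, so bits 1 through 4 are set and the target brings up reactors on cores 1-4, which is exactly what the four reactor_run notices that follow report; core 0 is left free, and the bdevperf initiator started later in this test case is pinned there with -c 0x1. A quick, minimal way to decode such a mask in the shell (illustrative helper, not part of the test scripts):

    # List the CPU cores selected by an SPDK/DPDK hex core mask.
    mask=0x1E
    for bit in $(seq 0 31); do
        if (( (mask >> bit) & 1 )); then
            echo "core $bit"
        fi
    done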
00:23:30.887 [2024-07-16 00:34:44.378644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.887 [2024-07-16 00:34:44.378807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.887 [2024-07-16 00:34:44.378968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.887 [2024-07-16 00:34:44.378971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.481 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.481 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:31.481 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.481 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:31.481 00:34:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.481 [2024-07-16 00:34:45.042441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.481 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.741 Malloc1 00:23:31.741 [2024-07-16 00:34:45.141283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.741 Malloc2 00:23:31.741 Malloc3 00:23:31.741 Malloc4 00:23:31.741 Malloc5 00:23:31.741 Malloc6 00:23:31.741 Malloc7 00:23:32.031 Malloc8 00:23:32.031 Malloc9 00:23:32.031 Malloc10 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1169807 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1169807 /var/tmp/bdevperf.sock 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1169807 ']' 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
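The per-subsystem configuration that the target/shutdown.sh@27/28 loop writes into rpcs.txt is not echoed to the log, but from what the target reports (Malloc1 through Malloc10, a TCP listener on 10.0.0.2 port 4420, and subsystems nqn.2016-06.io.spdk:cnode1-10 that the initiator attaches to below), and given that the TCP transport was already created above with nvmf_create_transport -t tcp -o -u 8192, each iteration plausibly boils down to RPC calls along these lines. The exact options, bdev sizes and serial number format are illustrative assumptions, not taken from this run:

    # Hypothetical setup for subsystem $i (1..10); mirrors what the log implies,
    # not the verbatim contents of rpcs.txt.
    i=1
    rpc.py bdev_malloc_create -b Malloc$i 64 512            # 64 MiB bdev, 512 B blocks (sizes assumed)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420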
00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 [2024-07-16 00:34:45.595518] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:32.031 [2024-07-16 00:34:45.595570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169807 ] 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:32.031 { 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme$subsystem", 00:23:32.031 "trtype": "$TEST_TRANSPORT", 00:23:32.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "$NVMF_PORT", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:32.031 "hdgst": ${hdgst:-false}, 00:23:32.031 "ddgst": ${ddgst:-false} 00:23:32.031 }, 00:23:32.031 "method": 
"bdev_nvme_attach_controller" 00:23:32.031 } 00:23:32.031 EOF 00:23:32.031 )") 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:32.031 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:32.031 00:34:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme1", 00:23:32.031 "trtype": "tcp", 00:23:32.031 "traddr": "10.0.0.2", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "4420", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.031 "hdgst": false, 00:23:32.031 "ddgst": false 00:23:32.031 }, 00:23:32.031 "method": "bdev_nvme_attach_controller" 00:23:32.031 },{ 00:23:32.031 "params": { 00:23:32.031 "name": "Nvme2", 00:23:32.031 "trtype": "tcp", 00:23:32.031 "traddr": "10.0.0.2", 00:23:32.031 "adrfam": "ipv4", 00:23:32.031 "trsvcid": "4420", 00:23:32.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:32.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:32.031 "hdgst": false, 00:23:32.031 "ddgst": false 00:23:32.031 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme3", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme4", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme5", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme6", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme7", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 
},{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme8", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme9", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 },{ 00:23:32.032 "params": { 00:23:32.032 "name": "Nvme10", 00:23:32.032 "trtype": "tcp", 00:23:32.032 "traddr": "10.0.0.2", 00:23:32.032 "adrfam": "ipv4", 00:23:32.032 "trsvcid": "4420", 00:23:32.032 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:32.032 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:32.032 "hdgst": false, 00:23:32.032 "ddgst": false 00:23:32.032 }, 00:23:32.032 "method": "bdev_nvme_attach_controller" 00:23:32.032 }' 00:23:32.292 [2024-07-16 00:34:45.662456] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.292 [2024-07-16 00:34:45.727377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.675 Running I/O for 10 seconds... 00:23:33.675 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.675 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:33.675 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:33.675 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.675 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:33.935 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.195 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.195 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.195 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.195 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.196 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.196 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.457 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.457 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:34.457 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:34.457 00:34:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1169807 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1169807 ']' 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1169807 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:34.718 00:34:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169807 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169807' 00:23:34.718 killing process with pid 1169807 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1169807 00:23:34.718 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1169807 00:23:34.718 Received shutdown signal, test time was about 0.986640 seconds 00:23:34.718 00:23:34.718 Latency(us) 00:23:34.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.718 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme1n1 : 0.96 265.32 16.58 0.00 0.00 238368.21 15728.64 253405.87 00:23:34.718 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme2n1 : 0.98 261.60 16.35 0.00 0.00 236822.40 15400.96 249910.61 00:23:34.718 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme3n1 : 0.94 203.51 12.72 0.00 0.00 297846.33 21845.33 276125.01 00:23:34.718 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme4n1 : 0.97 262.77 16.42 0.00 0.00 226184.32 23265.28 267386.88 00:23:34.718 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme5n1 : 0.98 262.21 16.39 0.00 0.00 221800.11 25012.91 246415.36 00:23:34.718 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme6n1 : 0.97 198.78 12.42 0.00 0.00 284542.29 39540.05 253405.87 00:23:34.718 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme7n1 : 0.96 200.31 12.52 0.00 0.00 276850.92 20206.93 274377.39 00:23:34.718 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme8n1 : 0.95 206.66 12.92 0.00 0.00 260417.79 2962.77 248162.99 00:23:34.718 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme9n1 : 0.97 264.77 16.55 0.00 0.00 199418.45 35170.99 251658.24 00:23:34.718 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:34.718 Verification LBA range: start 0x0 length 0x400 00:23:34.718 Nvme10n1 : 0.99 256.66 16.04 0.00 0.00 201904.41 18786.99 244667.73 00:23:34.718 =================================================================================================================== 00:23:34.718 Total : 2382.58 148.91 0.00 0.00 240564.63 2962.77 276125.01 00:23:34.979 00:34:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 
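For reference, the waitforio polling traced above (target/shutdown.sh lines 50-69) reduces to the sketch below; the helper name, the 100-read threshold, the 10 retries and the 0.25 s back-off are taken from the trace, while the function body itself is a reconstruction rather than the verbatim script.

  # Reconstruction of the waitforio helper exercised above (not the verbatim
  # target/shutdown.sh): poll bdevperf over its private RPC socket until the
  # named bdev has completed at least 100 reads, retrying 10 times at 0.25 s.
  # rpc_cmd is the harness wrapper around scripts/rpc.py seen throughout this log.
  waitforio() {
      local rpc_sock=$1 bdev=$2
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

  # Invocation matching the trace: waitforio /var/tmp/bdevperf.sock Nvme1n1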
00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1169568 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.921 rmmod nvme_tcp 00:23:35.921 rmmod nvme_fabrics 00:23:35.921 rmmod nvme_keyring 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1169568 ']' 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1169568 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1169568 ']' 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1169568 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.921 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169568 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169568' 00:23:36.182 killing process with pid 1169568 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1169568 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1169568 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.182 
00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.182 00:34:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.733 00:23:38.733 real 0m8.066s 00:23:38.733 user 0m24.605s 00:23:38.733 sys 0m1.233s 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.733 ************************************ 00:23:38.733 END TEST nvmf_shutdown_tc2 00:23:38.733 ************************************ 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:38.733 ************************************ 00:23:38.733 START TEST nvmf_shutdown_tc3 00:23:38.733 ************************************ 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.733 
00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.733 00:34:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:38.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:38.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:38.733 Found net devices under 0000:31:00.0: cvl_0_0 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.733 00:34:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:38.733 Found net devices under 0000:31:00.1: cvl_0_1 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.733 00:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.733 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.734 00:34:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:23:38.734 00:23:38.734 --- 10.0.0.2 ping statistics --- 00:23:38.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.734 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:23:38.734 00:23:38.734 --- 10.0.0.1 ping statistics --- 00:23:38.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.734 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1171175 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1171175 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1171175 ']' 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:38.734 00:34:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.734 00:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.996 [2024-07-16 00:34:52.415151] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:38.996 [2024-07-16 00:34:52.415212] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.996 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.996 [2024-07-16 00:34:52.510496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.996 [2024-07-16 00:34:52.579189] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.996 [2024-07-16 00:34:52.579228] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.996 [2024-07-16 00:34:52.579239] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.996 [2024-07-16 00:34:52.579244] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.996 [2024-07-16 00:34:52.579248] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
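The nvmf_tcp_init sequence traced above amounts to the wiring below; the interface names, addresses and flags are copied from the trace, while the standalone-script framing (shortened nvmf_tgt path, backgrounding plus waitforlisten) is illustrative rather than the verbatim nvmf/common.sh.

  # Restatement of the TCP test-bed wiring performed by nvmf_tcp_init above.
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, host namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, listens on 10.0.0.2:4420
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity checks in both directions, matching the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # The target is then launched inside the namespace and the test waits for its
  # RPC socket; waitforlisten is the harness helper seen in the trace.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  waitforlisten $!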
00:23:38.996 [2024-07-16 00:34:52.579382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.996 [2024-07-16 00:34:52.579647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.996 [2024-07-16 00:34:52.579800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.996 [2024-07-16 00:34:52.579800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.568 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.568 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:39.568 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.568 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.568 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.827 [2024-07-16 00:34:53.231568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.827 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.827 Malloc1 00:23:39.827 [2024-07-16 00:34:53.330484] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.827 Malloc2 00:23:39.827 Malloc3 00:23:39.827 Malloc4 00:23:40.087 Malloc5 00:23:40.087 Malloc6 00:23:40.087 Malloc7 00:23:40.087 Malloc8 00:23:40.087 Malloc9 00:23:40.087 Malloc10 00:23:40.087 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.087 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:40.087 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.087 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1171473 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1171473 /var/tmp/bdevperf.sock 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1171473 ']' 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
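The create_subsystems loop above appends one heredoc per subsystem to rpcs.txt and applies the whole batch with a single rpc_cmd call; the heredoc bodies are not echoed in the trace, so the per-subsystem RPCs below are an assumption inferred from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 TCP listener that do appear in the output.

  # Assumed expansion of the per-subsystem heredocs written to rpcs.txt above
  # (the bodies are not echoed in the xtrace output). The RPC names are standard
  # SPDK RPCs; malloc sizes and serial numbers are illustrative. The real script
  # batches these into rpcs.txt and applies them in one rpc_cmd call.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # verbatim from the trace
  for i in {1..10}; do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done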
00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 [2024-07-16 00:34:53.771867] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
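The gen_nvmf_target_json expansion traced above can be summarized as the sketch below; the per-controller objects mirror the heredocs in the trace, whereas the jq construction, the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions (the real helper feeds bdevperf directly via --json /dev/fd/63, as the command line just above shows).

  # Sketch of the config generation and the bdevperf invocation traced above.
  gen_config() {
      local i entries=()
      for i in "$@"; do
          # One bdev_nvme_attach_controller entry per subsystem, matching the
          # parameters printed later in this log.
          entries+=("$(jq -cn --arg i "$i" '{
              params: {
                name: ("Nvme" + $i), trtype: "tcp", traddr: "10.0.0.2",
                adrfam: "ipv4", trsvcid: "4420",
                subnqn: ("nqn.2016-06.io.spdk:cnode" + $i),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $i),
                hdgst: false, ddgst: false
              },
              method: "bdev_nvme_attach_controller"
            }')")
      done
      # Comma-join the entries and wrap them into a bdev-subsystem app config
      # (the wrapper shape is an assumption, not shown in the trace).
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' \
          "$(IFS=,; printf '%s' "${entries[*]}")" | jq .
  }

  gen_config {1..10} > /tmp/nvmf_bdevperf.json    # hypothetical path
  # bdevperf flags copied from the trace: queue depth 64, 64 KiB I/Os, verify, 10 s.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json /tmp/nvmf_bdevperf.json -q 64 -o 65536 -w verify -t 10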
00:23:40.348 [2024-07-16 00:34:53.771921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171473 ] 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.348 "trsvcid": "$NVMF_PORT", 00:23:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.348 "hdgst": ${hdgst:-false}, 00:23:40.348 "ddgst": ${ddgst:-false} 00:23:40.348 }, 00:23:40.348 "method": "bdev_nvme_attach_controller" 00:23:40.348 } 00:23:40.348 EOF 00:23:40.348 )") 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.348 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.348 { 00:23:40.348 "params": { 00:23:40.348 "name": "Nvme$subsystem", 00:23:40.348 "trtype": "$TEST_TRANSPORT", 00:23:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.348 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "$NVMF_PORT", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.349 "hdgst": ${hdgst:-false}, 00:23:40.349 "ddgst": ${ddgst:-false} 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 } 00:23:40.349 EOF 00:23:40.349 )") 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.349 { 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme$subsystem", 00:23:40.349 "trtype": "$TEST_TRANSPORT", 00:23:40.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "$NVMF_PORT", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.349 
"hdgst": ${hdgst:-false}, 00:23:40.349 "ddgst": ${ddgst:-false} 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 } 00:23:40.349 EOF 00:23:40.349 )") 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:40.349 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:40.349 00:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme1", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme2", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme3", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme4", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme5", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme6", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme7", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.349 "hdgst": false, 
00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme8", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme9", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 },{ 00:23:40.349 "params": { 00:23:40.349 "name": "Nvme10", 00:23:40.349 "trtype": "tcp", 00:23:40.349 "traddr": "10.0.0.2", 00:23:40.349 "adrfam": "ipv4", 00:23:40.349 "trsvcid": "4420", 00:23:40.349 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.349 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.349 "hdgst": false, 00:23:40.349 "ddgst": false 00:23:40.349 }, 00:23:40.349 "method": "bdev_nvme_attach_controller" 00:23:40.349 }' 00:23:40.349 [2024-07-16 00:34:53.838658] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.349 [2024-07-16 00:34:53.903284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.734 Running I/O for 10 seconds... 00:23:41.734 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.735 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:41.735 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:41.735 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.735 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:41.996 00:34:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:41.996 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:42.256 00:34:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.517 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1171175 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1171175 ']' 00:23:42.795 00:34:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1171175 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171175 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171175' 00:23:42.795 killing process with pid 1171175 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1171175 00:23:42.795 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1171175 00:23:42.795 [2024-07-16 00:34:56.242552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242650] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set 00:23:42.795 [2024-07-16 00:34:56.242680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is 
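The killing process with pid 1171175 messages above come from the killprocess helper in common/autotest_common.sh: it checks that the pid is still alive, verifies via ps that it is the SPDK reactor rather than a sudo wrapper, then kills and reaps it. That teardown is what triggers the flood of recv-state errors that follows as the target drops its TCP qpairs. A condensed sketch of the same pattern, assuming the pid is a child of the current shell as it is in the test:

# Sketch of the kill-and-reap sequence traced above.
pid=1171175                                  # nvmf target app started earlier in the test
if kill -0 "$pid" 2>/dev/null; then          # still running?
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1
        if [ "$name" != sudo ]; then
                echo "killing process with pid $pid"
                kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true      # reap; ignore the signal-induced exit status
fi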
same with the state(5) to be set
00:23:42.795 [2024-07-16 00:34:56.242685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c61f0 is same with the state(5) to be set
00:23:42.795 [2024-07-16 00:34:56.243635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234c780 is same with the state(5) to be set
00:23:42.796 [2024-07-16 00:34:56.244751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66d0 is same with the state(5) to be set
00:23:42.797 [2024-07-16 00:34:56.246091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6bb0 is same with the state(5) to be set
00:23:42.797 [2024-07-16 00:34:56.247098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70b0 is same with the state(5) to be set
00:23:42.798 [2024-07-16 00:34:56.248228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7a90 is same with the state(5) to be set
00:23:42.799 [2024-07-16 00:34:56.249193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7f70 is same with the state(5) to be set
00:23:42.799 [2024-07-16 00:34:56.249707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set
(the recv-state error above repeats continuously for each of these tqpairs while the target tears down its connections)
00:23:42.799 [2024-07-16 00:34:56.250181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:42.799 [2024-07-16 00:34:56.250215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.799 [2024-07-16 00:34:56.250225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:42.799 [2024-07-16 00:34:56.250239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.799 [2024-07-16 00:34:56.250248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:42.799 [2024-07-16 00:34:56.250255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.799 [2024-07-16 00:34:56.250263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:42.799 [2024-07-16 00:34:56.250270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21946d0 is same with the state(5) to be set 00:23:42.799 [2024-07-16 00:34:56.250305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8a50 is same with the state(5) to be set 00:23:42.799 [2024-07-16 00:34:56.250387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f50 is same with the state(5) to be set 00:23:42.799 [2024-07-16 00:34:56.250476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.799 [2024-07-16 00:34:56.250521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.799 [2024-07-16 00:34:56.250528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231fd40 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.250564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb000 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.250648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343890 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.250750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2360110 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.250831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97610 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.250913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.800 [2024-07-16 00:34:56.250967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.250974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d08e0 is same with the state(5) to be set 00:23:42.800 [2024-07-16 00:34:56.251568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.800 [2024-07-16 00:34:56.251919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.800 [2024-07-16 00:34:56.251927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.251935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.251942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.251952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.251959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.251968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.251976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.251986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.251993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.801 [2024-07-16 00:34:56.252630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.801 [2024-07-16 00:34:56.252639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252717] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ae170 was disconnected and freed. reset controller. 00:23:42.802 [2024-07-16 00:34:56.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.252983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.252990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.253403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.802 [2024-07-16 00:34:56.253459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.802 [2024-07-16 00:34:56.261114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261171] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261185] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.802 [2024-07-16 00:34:56.261190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the 
state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261316] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261348] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261387] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.261415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8930 is same with the state(5) to be set 00:23:42.803 [2024-07-16 00:34:56.270946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.803 [2024-07-16 00:34:56.271262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.803 [2024-07-16 00:34:56.271271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.804 [2024-07-16 00:34:56.271313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 
00:34:56.271489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.271507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271575] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22af660 was disconnected and freed. reset controller. 00:23:42.804 [2024-07-16 00:34:56.271749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21946d0 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b8a50 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d0f50 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231fd40 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.804 [2024-07-16 00:34:56.271845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.804 [2024-07-16 00:34:56.271861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.804 [2024-07-16 00:34:56.271879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.804 [2024-07-16 00:34:56.271895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.271903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f6c0 is same with the state(5) to be set 00:23:42.804 [2024-07-16 00:34:56.271920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb000 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343890 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2360110 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97610 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.271976] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d08e0 (9): Bad file descriptor 00:23:42.804 [2024-07-16 00:34:56.272204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.804 [2024-07-16 00:34:56.272576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.804 [2024-07-16 00:34:56.272586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.272988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.272998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.805 [2024-07-16 00:34:56.273313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.805 [2024-07-16 00:34:56.273320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.273329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.273337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.273389] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218ea30 was disconnected and freed. reset controller. 00:23:42.806 [2024-07-16 00:34:56.277534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:42.806 [2024-07-16 00:34:56.277564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:42.806 [2024-07-16 00:34:56.278402] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.278428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:42.806 [2024-07-16 00:34:56.278751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.806 [2024-07-16 00:34:56.278767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2343890 with addr=10.0.0.2, port=4420 00:23:42.806 [2024-07-16 00:34:56.278778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343890 is same with the state(5) to be set 00:23:42.806 [2024-07-16 00:34:56.279130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.806 [2024-07-16 00:34:56.279141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231fd40 with addr=10.0.0.2, port=4420 00:23:42.806 [2024-07-16 00:34:56.279148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231fd40 is same with the state(5) to be set 00:23:42.806 [2024-07-16 00:34:56.279179] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279218] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279265] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279309] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279605] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279663] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.806 [2024-07-16 00:34:56.279974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.806 [2024-07-16 00:34:56.279993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d08e0 with addr=10.0.0.2, port=4420 00:23:42.806 [2024-07-16 00:34:56.280001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d08e0 is same with the state(5) to be set 00:23:42.806 [2024-07-16 00:34:56.280014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343890 (9): Bad file descriptor 00:23:42.806 [2024-07-16 00:34:56.280025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231fd40 (9): Bad file descriptor 00:23:42.806 [2024-07-16 00:34:56.280062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 
[2024-07-16 00:34:56.280195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 
00:34:56.280386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280567] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.806 [2024-07-16 00:34:56.280655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.806 [2024-07-16 00:34:56.280663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.280983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.807 [2024-07-16 00:34:56.281091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.807 [2024-07-16 00:34:56.281100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.807 [2024-07-16 00:34:56.281214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.807 [2024-07-16 00:34:56.281223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235cc80 is same with the state(5) to be set
00:23:42.807 [2024-07-16 00:34:56.281279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235cc80 was disconnected and freed. reset controller.
00:23:42.807 [2024-07-16 00:34:56.281348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d08e0 (9): Bad file descriptor
00:23:42.807 [2024-07-16 00:34:56.281360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:42.807 [2024-07-16 00:34:56.281367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:42.807 [2024-07-16 00:34:56.281376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:42.807 [2024-07-16 00:34:56.281389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:42.807 [2024-07-16 00:34:56.281396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:42.807 [2024-07-16 00:34:56.281403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:42.807 [2024-07-16 00:34:56.282668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.807 [2024-07-16 00:34:56.282681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.808 [2024-07-16 00:34:56.282688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:42.808 [2024-07-16 00:34:56.282705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:42.808 [2024-07-16 00:34:56.282713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:42.808 [2024-07-16 00:34:56.282721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:42.808 [2024-07-16 00:34:56.282750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f6c0 (9): Bad file descriptor 00:23:42.808 [2024-07-16 00:34:56.282826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.808 [2024-07-16 00:34:56.283270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.808 [2024-07-16 00:34:56.283295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b8a50 with addr=10.0.0.2, port=4420 00:23:42.808 [2024-07-16 00:34:56.283304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8a50 is same with the state(5) to be set 00:23:42.808 [2024-07-16 00:34:56.283330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.808 [2024-07-16 00:34:56.283981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.808 [2024-07-16 00:34:56.283991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.809 [2024-07-16 00:34:56.284102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.284252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.284260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235a3a0 is same with the state(5) to be set 00:23:42.809 [2024-07-16 00:34:56.285480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 
00:34:56.285503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.809 [2024-07-16 00:34:56.285961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.809 [2024-07-16 00:34:56.285968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.285978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.285986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.285995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.286597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.286604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235b7b0 is same with the state(5) to be set 00:23:42.810 [2024-07-16 00:34:56.288128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.810 [2024-07-16 00:34:56.288239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.810 [2024-07-16 00:34:56.288249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.811 [2024-07-16 00:34:56.288887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.811 [2024-07-16 00:34:56.288896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.288913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.288931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.288950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:42.812 [2024-07-16 00:34:56.288967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.288984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.288992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 
00:34:56.289141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.289260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.289269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235e1e0 is same with the state(5) to be set 00:23:42.812 [2024-07-16 00:34:56.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.812 [2024-07-16 00:34:56.290917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.812 [2024-07-16 00:34:56.290927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.290935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.290944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.290952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.290962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.290971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.290982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.290990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.813 [2024-07-16 00:34:56.291669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.813 [2024-07-16 00:34:56.291677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218ff20 is same with the state(5) to be set 00:23:42.814 [2024-07-16 00:34:56.292961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.292975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.292987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.292994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.814 [2024-07-16 00:34:56.293717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.814 [2024-07-16 00:34:56.293726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.815 [2024-07-16 00:34:56.293833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.293985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.293992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 
00:34:56.294009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.294027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.294043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.294060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.294078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.294094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.294103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f10 is same with the state(5) to be set 00:23:42.815 [2024-07-16 00:34:56.297725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:42.815 [2024-07-16 00:34:56.297754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:42.815 [2024-07-16 00:34:56.297764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:42.815 [2024-07-16 00:34:56.297774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:42.815 [2024-07-16 00:34:56.297816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b8a50 (9): Bad file descriptor 00:23:42.815 [2024-07-16 00:34:56.297871] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.815 [2024-07-16 00:34:56.297887] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:42.815 [2024-07-16 00:34:56.297965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:42.815 [2024-07-16 00:34:56.298411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.815 [2024-07-16 00:34:56.298426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21946d0 with addr=10.0.0.2, port=4420 00:23:42.815 [2024-07-16 00:34:56.298434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21946d0 is same with the state(5) to be set 00:23:42.815 [2024-07-16 00:34:56.298818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.815 [2024-07-16 00:34:56.298829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2360110 with addr=10.0.0.2, port=4420 00:23:42.815 [2024-07-16 00:34:56.298836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2360110 is same with the state(5) to be set 00:23:42.815 [2024-07-16 00:34:56.299054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.815 [2024-07-16 00:34:56.299065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d0f50 with addr=10.0.0.2, port=4420 00:23:42.815 [2024-07-16 00:34:56.299072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f50 is same with the state(5) to be set 00:23:42.815 [2024-07-16 00:34:56.299443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.815 [2024-07-16 00:34:56.299454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97610 with addr=10.0.0.2, port=4420 00:23:42.815 [2024-07-16 00:34:56.299462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97610 is same with the state(5) to be set 00:23:42.815 [2024-07-16 00:34:56.299469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:42.815 [2024-07-16 00:34:56.299476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:42.815 [2024-07-16 00:34:56.299484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:42.815 [2024-07-16 00:34:56.300558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.815 [2024-07-16 00:34:56.300671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.815 [2024-07-16 00:34:56.300679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 
00:34:56.300739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.300985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.300994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.816 [2024-07-16 00:34:56.301401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.816 [2024-07-16 00:34:56.301409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.817 [2024-07-16 00:34:56.301669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.817 [2024-07-16 00:34:56.301678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0a20 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.303391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:42.817 [2024-07-16 00:34:56.303414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:42.817 [2024-07-16 00:34:56.303423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:42.817 [2024-07-16 00:34:56.303432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.817 task offset: 24576 on job bdev=Nvme7n1 fails 
00:23:42.817 
00:23:42.817 Latency(us) 
00:23:42.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:42.817 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme1n1 ended in about 0.96 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme1n1 : 0.96 144.96 9.06 55.27 0.00 315713.42 40195.41 248162.99 
00:23:42.817 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme2n1 ended in about 0.96 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme2n1 : 0.96 143.56 8.97 66.58 0.00 295240.22 22828.37 284863.15 
00:23:42.817 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme3n1 ended in about 0.96 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme3n1 : 0.96 200.81 12.55 66.94 0.00 226858.13 13052.59 251658.24 
00:23:42.817 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme4n1 ended in about 0.96 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme4n1 : 0.96 132.79 8.30 66.39 0.00 298905.32 19660.80 260396.37 
00:23:42.817 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme5n1 ended in about 0.95 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme5n1 : 0.95 205.07 12.82 67.31 0.00 213581.36 24903.68 262144.00 
00:23:42.817 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme6n1 ended in about 0.97 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme6n1 : 0.97 198.69 12.42 66.23 0.00 215275.09 22063.79 291853.65 
00:23:42.817 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme7n1 ended in about 0.95 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme7n1 : 0.95 202.50 12.66 67.50 0.00 205972.27 21080.75 242920.11 
00:23:42.817 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme8n1 ended in about 0.95 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme8n1 : 0.95 206.47 12.90 67.42 0.00 198456.87 7591.25 270882.13 
00:23:42.817 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme9n1 ended in about 0.98 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme9n1 : 0.98 196.66 12.29 65.55 0.00 203558.40 19005.44 218453.33 
00:23:42.817 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:42.817 Job: Nvme10n1 ended in about 0.97 seconds with error 
00:23:42.817 Verification LBA range: start 0x0 length 0x400 
00:23:42.817 Nvme10n1 : 0.97 132.13 8.26 66.06 0.00 262693.83 20971.52 255153.49 
=================================================================================================================== 
00:23:42.817 Total : 1763.63 110.23 655.25 0.00 238270.08 7591.25 291853.65 
00:23:42.817 [2024-07-16 00:34:56.328915] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:23:42.817 [2024-07-16 00:34:56.328945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:23:42.817 [2024-07-16 00:34:56.329396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.817 [2024-07-16 00:34:56.329412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb000 with addr=10.0.0.2, port=4420 00:23:42.817 [2024-07-16 00:34:56.329421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb000 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.329434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21946d0 (9): Bad file descriptor 00:23:42.817 [2024-07-16 00:34:56.329445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2360110 (9): Bad file descriptor 00:23:42.817 [2024-07-16 00:34:56.329454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d0f50 (9): Bad file descriptor 00:23:42.817 [2024-07-16 00:34:56.329464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97610 (9): Bad file descriptor 00:23:42.817 [2024-07-16 00:34:56.329932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.817 [2024-07-16 00:34:56.329947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231fd40 with addr=10.0.0.2, port=4420 00:23:42.817 [2024-07-16 00:34:56.329960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231fd40 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.330376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.817 [2024-07-16 00:34:56.330387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2343890 with addr=10.0.0.2, port=4420 00:23:42.817 [2024-07-16 00:34:56.330394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343890 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.330592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.817 [2024-07-16 00:34:56.330602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d08e0 with addr=10.0.0.2, port=4420 00:23:42.817 [2024-07-16 00:34:56.330610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d08e0 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.330975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.817 [2024-07-16 00:34:56.330988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f6c0 with addr=10.0.0.2, port=4420 00:23:42.817 [2024-07-16 00:34:56.330995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f6c0 is same with the state(5) to be set 00:23:42.817 [2024-07-16 00:34:56.331004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb000 (9): Bad file descriptor 00:23:42.817 [2024-07-16 00:34:56.331013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.817 [2024-07-16 00:34:56.331019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:42.817 [2024-07-16 00:34:56.331028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:42.817 [2024-07-16 00:34:56.331039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:42.817 [2024-07-16 00:34:56.331045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:42.817 [2024-07-16 00:34:56.331052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:42.817 [2024-07-16 00:34:56.331062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331132] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.818 [2024-07-16 00:34:56.331143] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.818 [2024-07-16 00:34:56.331154] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.818 [2024-07-16 00:34:56.331164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.818 [2024-07-16 00:34:56.331175] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.818 [2024-07-16 00:34:56.331513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.331524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.331534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.331540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.818 [2024-07-16 00:34:56.331552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231fd40 (9): Bad file descriptor 00:23:42.818 [2024-07-16 00:34:56.331563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343890 (9): Bad file descriptor 00:23:42.818 [2024-07-16 00:34:56.331572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d08e0 (9): Bad file descriptor 00:23:42.818 [2024-07-16 00:34:56.331582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f6c0 (9): Bad file descriptor 00:23:42.818 [2024-07-16 00:34:56.331590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:42.818 [2024-07-16 00:34:56.331883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.331898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:42.818 [2024-07-16 00:34:56.331972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.331979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.331985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:42.818 [2024-07-16 00:34:56.332019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.332026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.818 [2024-07-16 00:34:56.332034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.332040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.818 [2024-07-16 00:34:56.332393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.818 [2024-07-16 00:34:56.332406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b8a50 with addr=10.0.0.2, port=4420 00:23:42.818 [2024-07-16 00:34:56.332415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8a50 is same with the state(5) to be set 00:23:42.818 [2024-07-16 00:34:56.332446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b8a50 (9): Bad file descriptor 00:23:42.818 [2024-07-16 00:34:56.332480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:42.818 [2024-07-16 00:34:56.332487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:42.818 [2024-07-16 00:34:56.332494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:42.818 [2024-07-16 00:34:56.332525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.079 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:43.079 00:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1171473 00:23:44.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1171473) - No such process 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.023 rmmod nvme_tcp 00:23:44.023 rmmod nvme_fabrics 00:23:44.023 rmmod nvme_keyring 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.023 00:34:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.023 00:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.567 00:23:46.567 real 0m7.716s 00:23:46.567 user 0m18.490s 00:23:46.567 sys 0m1.230s 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.567 ************************************ 00:23:46.567 END TEST nvmf_shutdown_tc3 00:23:46.567 ************************************ 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:46.567 00:23:46.567 real 0m33.100s 00:23:46.567 user 1m16.905s 00:23:46.567 sys 0m9.580s 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.567 00:34:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.567 ************************************ 00:23:46.567 END TEST nvmf_shutdown 00:23:46.567 ************************************ 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:46.567 00:34:59 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.567 00:34:59 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.567 00:34:59 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:46.567 00:34:59 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.567 00:34:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.567 ************************************ 00:23:46.567 START TEST nvmf_multicontroller 
00:23:46.567 ************************************ 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:46.567 * Looking for test storage... 00:23:46.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.567 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:46.568 
00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.568 00:34:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.867 00:35:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:54.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:54.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.867 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:54.867 Found net devices under 0000:31:00.0: cvl_0_0 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:54.868 Found net devices under 0000:31:00.1: cvl_0_1 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.868 00:35:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.868 00:35:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:23:54.868 00:23:54.868 --- 10.0.0.2 ping statistics --- 00:23:54.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.868 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:23:54.868 00:23:54.868 --- 10.0.0.1 ping statistics --- 00:23:54.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.868 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1176891 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1176891 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1176891 ']' 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.868 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.868 [2024-07-16 00:35:08.134865] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:54.868 [2024-07-16 00:35:08.134931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.868 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.868 [2024-07-16 00:35:08.231224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:54.868 [2024-07-16 00:35:08.323497] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.868 [2024-07-16 00:35:08.323550] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.868 [2024-07-16 00:35:08.323558] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.868 [2024-07-16 00:35:08.323565] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.868 [2024-07-16 00:35:08.323572] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
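For orientation, the nvmf_tcp_init phase traced above builds a back-to-back topology out of the two cvl interfaces: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). A minimal stand-alone sketch of that plumbing, using only commands already recorded in the log (the namespace name, addresses and port are the harness's defaults, and the nvmf_tgt path is shortened to a repo-relative one), looks like:

  # target interface goes into its own namespace; initiator interface stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP port 4420 on the initiator-side interface, as the harness does
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target application itself is then launched inside the namespace, as in the entry above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The reactor/core notices that follow are the target confirming it came up on the cores selected by that -m 0xE mask.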
00:23:54.868 [2024-07-16 00:35:08.323661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.868 [2024-07-16 00:35:08.323825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.868 [2024-07-16 00:35:08.323825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 [2024-07-16 00:35:08.953595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 Malloc0 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 [2024-07-16 00:35:09.019557] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 
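Taken together, the rpc_cmd calls traced above configure the target this multicontroller test runs against. As a hedged, manual-equivalent sketch (rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, run from the SPDK checkout and talking to the default /var/tmp/spdk.sock; the underlying RPCs and arguments are exactly the ones shown in the log):

  # create the TCP transport (options exactly as recorded above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc-backed bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1: allow any host (-a), serial number SPDK00000000000001
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # first TCP portal for cnode1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The entries that follow add a second portal (port 4421) and a second subsystem (cnode2) the same way; those are what the later duplicate bdev_nvme_attach_controller attempts from bdevperf are exercised against.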
00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 [2024-07-16 00:35:09.031511] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 Malloc1 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.441 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1177236 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1177236 /var/tmp/bdevperf.sock 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1177236 ']' 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.702 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.645 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.645 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:56.645 00:35:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:56.645 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.645 00:35:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.645 NVMe0n1 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.645 1 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.645 request: 00:23:56.645 { 00:23:56.645 "name": "NVMe0", 00:23:56.645 "trtype": "tcp", 00:23:56.645 "traddr": "10.0.0.2", 00:23:56.645 "adrfam": "ipv4", 00:23:56.645 "trsvcid": "4420", 00:23:56.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.645 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:56.645 "hostaddr": "10.0.0.2", 00:23:56.645 "hostsvcid": "60000", 00:23:56.645 "prchk_reftag": false, 00:23:56.645 "prchk_guard": false, 00:23:56.645 "hdgst": false, 00:23:56.645 "ddgst": false, 00:23:56.645 "method": "bdev_nvme_attach_controller", 00:23:56.645 "req_id": 1 00:23:56.645 } 00:23:56.645 Got JSON-RPC error response 00:23:56.645 response: 00:23:56.645 { 00:23:56.645 "code": -114, 00:23:56.645 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:56.645 } 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:56.645 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.646 request: 00:23:56.646 { 00:23:56.646 "name": "NVMe0", 00:23:56.646 "trtype": "tcp", 00:23:56.646 "traddr": "10.0.0.2", 00:23:56.646 "adrfam": "ipv4", 00:23:56.646 "trsvcid": "4420", 00:23:56.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.646 "hostaddr": "10.0.0.2", 00:23:56.646 "hostsvcid": "60000", 00:23:56.646 "prchk_reftag": false, 00:23:56.646 "prchk_guard": false, 00:23:56.646 
"hdgst": false, 00:23:56.646 "ddgst": false, 00:23:56.646 "method": "bdev_nvme_attach_controller", 00:23:56.646 "req_id": 1 00:23:56.646 } 00:23:56.646 Got JSON-RPC error response 00:23:56.646 response: 00:23:56.646 { 00:23:56.646 "code": -114, 00:23:56.646 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:56.646 } 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.646 request: 00:23:56.646 { 00:23:56.646 "name": "NVMe0", 00:23:56.646 "trtype": "tcp", 00:23:56.646 "traddr": "10.0.0.2", 00:23:56.646 "adrfam": "ipv4", 00:23:56.646 "trsvcid": "4420", 00:23:56.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.646 "hostaddr": "10.0.0.2", 00:23:56.646 "hostsvcid": "60000", 00:23:56.646 "prchk_reftag": false, 00:23:56.646 "prchk_guard": false, 00:23:56.646 "hdgst": false, 00:23:56.646 "ddgst": false, 00:23:56.646 "multipath": "disable", 00:23:56.646 "method": "bdev_nvme_attach_controller", 00:23:56.646 "req_id": 1 00:23:56.646 } 00:23:56.646 Got JSON-RPC error response 00:23:56.646 response: 00:23:56.646 { 00:23:56.646 "code": -114, 00:23:56.646 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:56.646 } 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.646 00:35:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.646 request: 00:23:56.646 { 00:23:56.646 "name": "NVMe0", 00:23:56.646 "trtype": "tcp", 00:23:56.646 "traddr": "10.0.0.2", 00:23:56.646 "adrfam": "ipv4", 00:23:56.646 "trsvcid": "4420", 00:23:56.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.646 "hostaddr": "10.0.0.2", 00:23:56.646 "hostsvcid": "60000", 00:23:56.646 "prchk_reftag": false, 00:23:56.646 "prchk_guard": false, 00:23:56.646 "hdgst": false, 00:23:56.646 "ddgst": false, 00:23:56.646 "multipath": "failover", 00:23:56.646 "method": "bdev_nvme_attach_controller", 00:23:56.646 "req_id": 1 00:23:56.646 } 00:23:56.646 Got JSON-RPC error response 00:23:56.646 response: 00:23:56.646 { 00:23:56.646 "code": -114, 00:23:56.646 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:56.646 } 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.646 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.907 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.907 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:56.907 00:35:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.295 0 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1177236 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1177236 ']' 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1177236 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1177236 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1177236' 00:23:58.295 killing process with pid 1177236 00:23:58.295 00:35:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1177236 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1177236 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:58.295 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:58.295 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:58.295 [2024-07-16 00:35:09.151434] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:58.295 [2024-07-16 00:35:09.151493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177236 ] 00:23:58.295 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.295 [2024-07-16 00:35:09.217740] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.295 [2024-07-16 00:35:09.282037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.295 [2024-07-16 00:35:10.409334] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 47148060-e1e1-44f6-be4a-e525210cec95 already exists 00:23:58.295 [2024-07-16 00:35:10.409364] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:47148060-e1e1-44f6-be4a-e525210cec95 alias for bdev NVMe1n1 00:23:58.295 [2024-07-16 00:35:10.409373] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:58.295 Running I/O for 1 seconds... 
00:23:58.295 00:23:58.295 Latency(us) 00:23:58.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.296 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:58.296 NVMe0n1 : 1.00 25126.60 98.15 0.00 0.00 5082.71 3904.85 15182.51 00:23:58.296 =================================================================================================================== 00:23:58.296 Total : 25126.60 98.15 0.00 0.00 5082.71 3904.85 15182.51 00:23:58.296 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.296 00:23:58.296 Latency(us) 00:23:58.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.296 =================================================================================================================== 00:23:58.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.296 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.296 rmmod nvme_tcp 00:23:58.296 rmmod nvme_fabrics 00:23:58.296 rmmod nvme_keyring 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1176891 ']' 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1176891 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1176891 ']' 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1176891 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1176891 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1176891' 00:23:58.296 killing process with pid 1176891 00:23:58.296 00:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1176891 00:23:58.296 00:35:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1176891 00:23:58.557 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.558 00:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.102 00:35:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.102 00:24:01.102 real 0m14.298s 00:24:01.102 user 0m16.401s 00:24:01.102 sys 0m6.763s 00:24:01.102 00:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.102 00:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 ************************************ 00:24:01.102 END TEST nvmf_multicontroller 00:24:01.102 ************************************ 00:24:01.102 00:35:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.102 00:35:14 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:01.102 00:35:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.102 00:35:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.102 00:35:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 ************************************ 00:24:01.102 START TEST nvmf_aer 00:24:01.102 ************************************ 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:01.102 * Looking for test storage... 
00:24:01.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.102 00:35:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.103 00:35:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.242 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:09.243 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:09.243 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:09.243 Found net devices under 0000:31:00.0: cvl_0_0 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:09.243 Found net devices under 0000:31:00.1: cvl_0_1 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.243 
00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:24:09.243 00:24:09.243 --- 10.0.0.2 ping statistics --- 00:24:09.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.243 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.484 ms 00:24:09.243 00:24:09.243 --- 10.0.0.1 ping statistics --- 00:24:09.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.243 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1182393 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1182393 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1182393 ']' 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.243 00:35:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:09.243 [2024-07-16 00:35:22.696875] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:09.243 [2024-07-16 00:35:22.696944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.243 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.243 [2024-07-16 00:35:22.775841] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.243 [2024-07-16 00:35:22.850827] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.243 [2024-07-16 00:35:22.850865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:09.243 [2024-07-16 00:35:22.850873] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.243 [2024-07-16 00:35:22.850880] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.243 [2024-07-16 00:35:22.850886] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.243 [2024-07-16 00:35:22.851053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.243 [2024-07-16 00:35:22.851167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.243 [2024-07-16 00:35:22.851228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.243 [2024-07-16 00:35:22.851239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 [2024-07-16 00:35:23.528864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 Malloc0 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 [2024-07-16 00:35:23.588266] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 [ 00:24:10.184 { 00:24:10.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:10.184 "subtype": "Discovery", 00:24:10.184 "listen_addresses": [], 00:24:10.184 "allow_any_host": true, 00:24:10.184 "hosts": [] 00:24:10.184 }, 00:24:10.184 { 00:24:10.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.184 "subtype": "NVMe", 00:24:10.184 "listen_addresses": [ 00:24:10.184 { 00:24:10.184 "trtype": "TCP", 00:24:10.184 "adrfam": "IPv4", 00:24:10.184 "traddr": "10.0.0.2", 00:24:10.184 "trsvcid": "4420" 00:24:10.184 } 00:24:10.184 ], 00:24:10.184 "allow_any_host": true, 00:24:10.184 "hosts": [], 00:24:10.184 "serial_number": "SPDK00000000000001", 00:24:10.184 "model_number": "SPDK bdev Controller", 00:24:10.184 "max_namespaces": 2, 00:24:10.184 "min_cntlid": 1, 00:24:10.184 "max_cntlid": 65519, 00:24:10.184 "namespaces": [ 00:24:10.184 { 00:24:10.184 "nsid": 1, 00:24:10.184 "bdev_name": "Malloc0", 00:24:10.184 "name": "Malloc0", 00:24:10.184 "nguid": "4C009EAF6F784FC1A1901418B81A9CE4", 00:24:10.184 "uuid": "4c009eaf-6f78-4fc1-a190-1418b81a9ce4" 00:24:10.184 } 00:24:10.184 ] 00:24:10.184 } 00:24:10.184 ] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1182624 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:10.184 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:10.184 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.446 Malloc1 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.446 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.446 Asynchronous Event Request test 00:24:10.446 Attaching to 10.0.0.2 00:24:10.446 Attached to 10.0.0.2 00:24:10.446 Registering asynchronous event callbacks... 00:24:10.447 Starting namespace attribute notice tests for all controllers... 00:24:10.447 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:10.447 aer_cb - Changed Namespace 00:24:10.447 Cleaning up... 00:24:10.447 [ 00:24:10.447 { 00:24:10.447 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:10.447 "subtype": "Discovery", 00:24:10.447 "listen_addresses": [], 00:24:10.447 "allow_any_host": true, 00:24:10.447 "hosts": [] 00:24:10.447 }, 00:24:10.447 { 00:24:10.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.447 "subtype": "NVMe", 00:24:10.447 "listen_addresses": [ 00:24:10.447 { 00:24:10.447 "trtype": "TCP", 00:24:10.447 "adrfam": "IPv4", 00:24:10.447 "traddr": "10.0.0.2", 00:24:10.447 "trsvcid": "4420" 00:24:10.447 } 00:24:10.447 ], 00:24:10.447 "allow_any_host": true, 00:24:10.447 "hosts": [], 00:24:10.447 "serial_number": "SPDK00000000000001", 00:24:10.447 "model_number": "SPDK bdev Controller", 00:24:10.447 "max_namespaces": 2, 00:24:10.447 "min_cntlid": 1, 00:24:10.447 "max_cntlid": 65519, 00:24:10.447 "namespaces": [ 00:24:10.447 { 00:24:10.447 "nsid": 1, 00:24:10.447 "bdev_name": "Malloc0", 00:24:10.447 "name": "Malloc0", 00:24:10.447 "nguid": "4C009EAF6F784FC1A1901418B81A9CE4", 00:24:10.447 "uuid": "4c009eaf-6f78-4fc1-a190-1418b81a9ce4" 00:24:10.447 }, 00:24:10.447 { 00:24:10.447 "nsid": 2, 00:24:10.447 "bdev_name": "Malloc1", 00:24:10.447 "name": "Malloc1", 00:24:10.447 "nguid": "B4AD3AEE11FD4E5FB87A73C88ECB4C5F", 00:24:10.447 "uuid": "b4ad3aee-11fd-4e5f-b87a-73c88ecb4c5f" 00:24:10.447 } 00:24:10.447 ] 00:24:10.447 } 00:24:10.447 ] 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1182624 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.447 00:35:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.447 rmmod nvme_tcp 00:24:10.447 rmmod nvme_fabrics 00:24:10.447 rmmod nvme_keyring 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1182393 ']' 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1182393 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1182393 ']' 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1182393 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182393 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182393' 00:24:10.447 killing process with pid 1182393 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1182393 00:24:10.447 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1182393 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:24:10.709 00:35:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.256 00:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.256 00:24:13.256 real 0m12.047s 00:24:13.256 user 0m7.781s 00:24:13.256 sys 0m6.582s 00:24:13.256 00:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.256 00:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.256 ************************************ 00:24:13.256 END TEST nvmf_aer 00:24:13.256 ************************************ 00:24:13.256 00:35:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:13.256 00:35:26 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:13.256 00:35:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:13.256 00:35:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.256 00:35:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.256 ************************************ 00:24:13.256 START TEST nvmf_async_init 00:24:13.256 ************************************ 00:24:13.256 00:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:13.256 * Looking for test storage... 00:24:13.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0e627c34940d42949ff1f2e6b0280317 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.257 00:35:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:21.403 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:21.403 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.403 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:21.404 Found net devices under 0000:31:00.0: cvl_0_0 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:21.404 Found net devices under 0000:31:00.1: cvl_0_1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.404 
00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:24:21.404 00:24:21.404 --- 10.0.0.2 ping statistics --- 00:24:21.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.404 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:24:21.404 00:24:21.404 --- 10.0.0.1 ping statistics --- 00:24:21.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.404 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1187319 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1187319 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1187319 ']' 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.404 00:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:21.404 [2024-07-16 00:35:34.802002] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:21.404 [2024-07-16 00:35:34.802069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.404 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.404 [2024-07-16 00:35:34.882566] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.404 [2024-07-16 00:35:34.956073] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.404 [2024-07-16 00:35:34.956113] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.404 [2024-07-16 00:35:34.956121] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.404 [2024-07-16 00:35:34.956127] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.404 [2024-07-16 00:35:34.956133] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
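The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. A minimal sketch, assuming the working directory is the SPDK checkout; the rpc_get_methods probe is roughly what waitforlisten does, while the retry count and sleep interval here are illustrative rather than copied from the helper.

NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
# Flags from the trace: app instance 0, tracepoint mask 0xFFFF, core mask 0x1.
"${NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    # Bail out if the target died instead of coming up.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # Ready once the UNIX-domain RPC socket answers a request.
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.5
done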
00:24:21.404 [2024-07-16 00:35:34.956158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.975 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.236 [2024-07-16 00:35:35.610956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.236 null0 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.236 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0e627c34940d42949ff1f2e6b0280317 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 [2024-07-16 00:35:35.667190] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.237 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 nvme0n1 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 [ 00:24:22.498 { 00:24:22.498 "name": "nvme0n1", 00:24:22.498 "aliases": [ 00:24:22.498 "0e627c34-940d-4294-9ff1-f2e6b0280317" 00:24:22.498 ], 00:24:22.498 "product_name": "NVMe disk", 00:24:22.498 "block_size": 512, 00:24:22.498 "num_blocks": 2097152, 00:24:22.498 "uuid": "0e627c34-940d-4294-9ff1-f2e6b0280317", 00:24:22.498 "assigned_rate_limits": { 00:24:22.498 "rw_ios_per_sec": 0, 00:24:22.498 "rw_mbytes_per_sec": 0, 00:24:22.498 "r_mbytes_per_sec": 0, 00:24:22.498 "w_mbytes_per_sec": 0 00:24:22.498 }, 00:24:22.498 "claimed": false, 00:24:22.498 "zoned": false, 00:24:22.498 "supported_io_types": { 00:24:22.498 "read": true, 00:24:22.498 "write": true, 00:24:22.498 "unmap": false, 00:24:22.498 "flush": true, 00:24:22.498 "reset": true, 00:24:22.498 "nvme_admin": true, 00:24:22.498 "nvme_io": true, 00:24:22.498 "nvme_io_md": false, 00:24:22.498 "write_zeroes": true, 00:24:22.498 "zcopy": false, 00:24:22.498 "get_zone_info": false, 00:24:22.498 "zone_management": false, 00:24:22.498 "zone_append": false, 00:24:22.498 "compare": true, 00:24:22.498 "compare_and_write": true, 00:24:22.498 "abort": true, 00:24:22.498 "seek_hole": false, 00:24:22.498 "seek_data": false, 00:24:22.498 "copy": true, 00:24:22.498 "nvme_iov_md": false 00:24:22.498 }, 00:24:22.498 "memory_domains": [ 00:24:22.498 { 00:24:22.498 "dma_device_id": "system", 00:24:22.498 "dma_device_type": 1 00:24:22.498 } 00:24:22.498 ], 00:24:22.498 "driver_specific": { 00:24:22.498 "nvme": [ 00:24:22.498 { 00:24:22.498 "trid": { 00:24:22.498 "trtype": "TCP", 00:24:22.498 "adrfam": "IPv4", 00:24:22.498 "traddr": "10.0.0.2", 00:24:22.498 "trsvcid": "4420", 00:24:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.498 }, 00:24:22.498 "ctrlr_data": { 00:24:22.498 "cntlid": 1, 00:24:22.498 "vendor_id": "0x8086", 00:24:22.498 "model_number": "SPDK bdev Controller", 00:24:22.498 "serial_number": "00000000000000000000", 00:24:22.498 "firmware_revision": "24.09", 00:24:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.498 "oacs": { 00:24:22.498 "security": 0, 00:24:22.498 "format": 0, 00:24:22.498 "firmware": 0, 00:24:22.498 "ns_manage": 0 00:24:22.498 }, 00:24:22.498 "multi_ctrlr": true, 00:24:22.498 "ana_reporting": false 00:24:22.498 }, 00:24:22.498 "vs": { 00:24:22.498 "nvme_version": "1.3" 00:24:22.498 }, 00:24:22.498 "ns_data": { 00:24:22.498 "id": 1, 00:24:22.498 "can_share": true 00:24:22.498 } 00:24:22.498 } 00:24:22.498 ], 00:24:22.498 "mp_policy": "active_passive" 00:24:22.498 } 00:24:22.498 } 00:24:22.498 ] 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
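Stripped of the rpc_cmd/xtrace wrappers, the async_init flow exercised up to this point is the RPC sequence below. All names, sizes, NGUIDs, and addresses are copied from the trace; driving them through scripts/rpc.py directly (instead of the test's rpc_cmd helper) is an assumption of this sketch.

rpc=./scripts/rpc.py          # talks to /var/tmp/spdk.sock by default

# Target side: transport, a 1024-block / 512-byte null bdev, and an
# exported subsystem with a fixed namespace GUID on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o          # flags exactly as traced above
$rpc bdev_null_create null0 1024 512
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0e627c34940d42949ff1f2e6b0280317
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side (same app): attach the exported namespace as nvme0n1, dump
# its bdev info, then exercise the reset path that produced the
# "resetting controller" / "Resetting controller successful" notices.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1
$rpc bdev_nvme_reset_controller nvme0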
00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 [2024-07-16 00:35:35.936043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:22.498 [2024-07-16 00:35:35.936103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20610 (9): Bad file descriptor 00:24:22.498 [2024-07-16 00:35:36.068329] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 [ 00:24:22.498 { 00:24:22.498 "name": "nvme0n1", 00:24:22.498 "aliases": [ 00:24:22.498 "0e627c34-940d-4294-9ff1-f2e6b0280317" 00:24:22.498 ], 00:24:22.498 "product_name": "NVMe disk", 00:24:22.498 "block_size": 512, 00:24:22.498 "num_blocks": 2097152, 00:24:22.498 "uuid": "0e627c34-940d-4294-9ff1-f2e6b0280317", 00:24:22.498 "assigned_rate_limits": { 00:24:22.498 "rw_ios_per_sec": 0, 00:24:22.498 "rw_mbytes_per_sec": 0, 00:24:22.498 "r_mbytes_per_sec": 0, 00:24:22.498 "w_mbytes_per_sec": 0 00:24:22.498 }, 00:24:22.498 "claimed": false, 00:24:22.498 "zoned": false, 00:24:22.498 "supported_io_types": { 00:24:22.498 "read": true, 00:24:22.498 "write": true, 00:24:22.498 "unmap": false, 00:24:22.498 "flush": true, 00:24:22.498 "reset": true, 00:24:22.498 "nvme_admin": true, 00:24:22.498 "nvme_io": true, 00:24:22.498 "nvme_io_md": false, 00:24:22.498 "write_zeroes": true, 00:24:22.498 "zcopy": false, 00:24:22.498 "get_zone_info": false, 00:24:22.498 "zone_management": false, 00:24:22.498 "zone_append": false, 00:24:22.498 "compare": true, 00:24:22.498 "compare_and_write": true, 00:24:22.498 "abort": true, 00:24:22.498 "seek_hole": false, 00:24:22.498 "seek_data": false, 00:24:22.498 "copy": true, 00:24:22.498 "nvme_iov_md": false 00:24:22.498 }, 00:24:22.498 "memory_domains": [ 00:24:22.498 { 00:24:22.498 "dma_device_id": "system", 00:24:22.498 "dma_device_type": 1 00:24:22.498 } 00:24:22.498 ], 00:24:22.498 "driver_specific": { 00:24:22.498 "nvme": [ 00:24:22.498 { 00:24:22.498 "trid": { 00:24:22.498 "trtype": "TCP", 00:24:22.498 "adrfam": "IPv4", 00:24:22.498 "traddr": "10.0.0.2", 00:24:22.498 "trsvcid": "4420", 00:24:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.498 }, 00:24:22.498 "ctrlr_data": { 00:24:22.498 "cntlid": 2, 00:24:22.498 "vendor_id": "0x8086", 00:24:22.498 "model_number": "SPDK bdev Controller", 00:24:22.498 "serial_number": "00000000000000000000", 00:24:22.498 "firmware_revision": "24.09", 00:24:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.498 "oacs": { 00:24:22.498 "security": 0, 00:24:22.498 "format": 0, 00:24:22.498 "firmware": 0, 00:24:22.498 "ns_manage": 0 00:24:22.498 }, 00:24:22.498 "multi_ctrlr": true, 00:24:22.498 "ana_reporting": false 00:24:22.498 }, 00:24:22.498 "vs": { 00:24:22.498 "nvme_version": "1.3" 00:24:22.498 }, 00:24:22.498 "ns_data": { 00:24:22.498 "id": 1, 00:24:22.498 "can_share": true 00:24:22.498 } 00:24:22.498 } 00:24:22.498 ], 00:24:22.498 "mp_policy": "active_passive" 00:24:22.498 } 00:24:22.498 } 
00:24:22.498 ] 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.paxUHbMGKC 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.paxUHbMGKC 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.498 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.759 [2024-07-16 00:35:36.132655] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.759 [2024-07-16 00:35:36.132767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.paxUHbMGKC 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.759 [2024-07-16 00:35:36.144682] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.paxUHbMGKC 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.759 [2024-07-16 00:35:36.156730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.759 [2024-07-16 00:35:36.156767] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
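The TLS portion traced above reduces to writing a PSK interchange key to a 0600 file and referencing the same file on both the subsystem's host entry and the initiator attach against the --secure-channel listener on port 4421. A sketch with values from the trace; the fixed key-file name stands in for the test's mktemp output, and the trace itself warns that this --psk path mechanism is deprecated for removal in v24.09.

rpc=./scripts/rpc.py
key_path=/tmp/psk.key         # illustrative name; the test uses mktemp
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"

# Drop the plaintext 4420 connection first, as the test does.
$rpc bdev_nvme_detach_controller nvme0

# Restrict the subsystem to an explicit host and open a TLS listener.
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Reattach over TLS, presenting the host NQN and the same PSK file.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"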
00:24:22.759 nvme0n1 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.759 [ 00:24:22.759 { 00:24:22.759 "name": "nvme0n1", 00:24:22.759 "aliases": [ 00:24:22.759 "0e627c34-940d-4294-9ff1-f2e6b0280317" 00:24:22.759 ], 00:24:22.759 "product_name": "NVMe disk", 00:24:22.759 "block_size": 512, 00:24:22.759 "num_blocks": 2097152, 00:24:22.759 "uuid": "0e627c34-940d-4294-9ff1-f2e6b0280317", 00:24:22.759 "assigned_rate_limits": { 00:24:22.759 "rw_ios_per_sec": 0, 00:24:22.759 "rw_mbytes_per_sec": 0, 00:24:22.759 "r_mbytes_per_sec": 0, 00:24:22.759 "w_mbytes_per_sec": 0 00:24:22.759 }, 00:24:22.759 "claimed": false, 00:24:22.759 "zoned": false, 00:24:22.759 "supported_io_types": { 00:24:22.759 "read": true, 00:24:22.759 "write": true, 00:24:22.759 "unmap": false, 00:24:22.759 "flush": true, 00:24:22.759 "reset": true, 00:24:22.759 "nvme_admin": true, 00:24:22.759 "nvme_io": true, 00:24:22.759 "nvme_io_md": false, 00:24:22.759 "write_zeroes": true, 00:24:22.759 "zcopy": false, 00:24:22.759 "get_zone_info": false, 00:24:22.759 "zone_management": false, 00:24:22.759 "zone_append": false, 00:24:22.759 "compare": true, 00:24:22.759 "compare_and_write": true, 00:24:22.759 "abort": true, 00:24:22.759 "seek_hole": false, 00:24:22.759 "seek_data": false, 00:24:22.759 "copy": true, 00:24:22.759 "nvme_iov_md": false 00:24:22.759 }, 00:24:22.759 "memory_domains": [ 00:24:22.759 { 00:24:22.759 "dma_device_id": "system", 00:24:22.759 "dma_device_type": 1 00:24:22.759 } 00:24:22.759 ], 00:24:22.759 "driver_specific": { 00:24:22.759 "nvme": [ 00:24:22.759 { 00:24:22.759 "trid": { 00:24:22.759 "trtype": "TCP", 00:24:22.759 "adrfam": "IPv4", 00:24:22.759 "traddr": "10.0.0.2", 00:24:22.759 "trsvcid": "4421", 00:24:22.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.759 }, 00:24:22.759 "ctrlr_data": { 00:24:22.759 "cntlid": 3, 00:24:22.759 "vendor_id": "0x8086", 00:24:22.759 "model_number": "SPDK bdev Controller", 00:24:22.759 "serial_number": "00000000000000000000", 00:24:22.759 "firmware_revision": "24.09", 00:24:22.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.759 "oacs": { 00:24:22.759 "security": 0, 00:24:22.759 "format": 0, 00:24:22.759 "firmware": 0, 00:24:22.759 "ns_manage": 0 00:24:22.759 }, 00:24:22.759 "multi_ctrlr": true, 00:24:22.759 "ana_reporting": false 00:24:22.759 }, 00:24:22.759 "vs": { 00:24:22.759 "nvme_version": "1.3" 00:24:22.759 }, 00:24:22.759 "ns_data": { 00:24:22.759 "id": 1, 00:24:22.759 "can_share": true 00:24:22.759 } 00:24:22.759 } 00:24:22.759 ], 00:24:22.759 "mp_policy": "active_passive" 00:24:22.759 } 00:24:22.759 } 00:24:22.759 ] 00:24:22.759 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.paxUHbMGKC 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.760 rmmod nvme_tcp 00:24:22.760 rmmod nvme_fabrics 00:24:22.760 rmmod nvme_keyring 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1187319 ']' 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1187319 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1187319 ']' 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1187319 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.760 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187319 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187319' 00:24:23.020 killing process with pid 1187319 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1187319 00:24:23.020 [2024-07-16 00:35:36.412432] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:23.020 [2024-07-16 00:35:36.412459] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1187319 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.020 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.021 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.021 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.021 00:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.021 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.021 00:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
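Once attached, the test reads the controller back through bdev_get_bdevs (the JSON above shows the TCP trid on port 4421, cntlid 3 and a shareable namespace), then detaches, removes the key file, and lets nvmftestfini unload the initiator modules and stop the target. A rough equivalent of that verify-and-teardown step; jq and the captured target PID are conveniences here, not part of the harness:

rpc=./scripts/rpc.py

# Confirm the namespace surfaced as nvme0n1 and is reached over the TLS listener.
$rpc bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].trid'

# Tear down in reverse order of setup.
$rpc bdev_nvme_detach_controller nvme0
rm -f "$key_path"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmf_tgt_pid"        # hypothetical variable holding the nvmf_tgt PID (1187319 in this run)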
_remove_spdk_ns 00:24:25.568 00:35:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.568 00:24:25.568 real 0m12.245s 00:24:25.568 user 0m4.248s 00:24:25.568 sys 0m6.458s 00:24:25.568 00:35:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.568 00:35:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.568 ************************************ 00:24:25.568 END TEST nvmf_async_init 00:24:25.568 ************************************ 00:24:25.568 00:35:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.568 00:35:38 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:25.568 00:35:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.568 00:35:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.568 00:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.568 ************************************ 00:24:25.568 START TEST dma 00:24:25.568 ************************************ 00:24:25.568 00:35:38 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:25.568 * Looking for test storage... 00:24:25.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.569 00:35:38 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.569 00:35:38 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.569 00:35:38 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.569 00:35:38 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.569 00:35:38 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:38 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:38 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:38 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:25.569 00:35:38 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.569 00:35:38 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.569 00:35:38 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:25.569 00:35:38 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:25.569 00:24:25.569 real 0m0.133s 00:24:25.569 user 0m0.060s 00:24:25.569 sys 0m0.082s 00:24:25.569 00:35:38 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.569 00:35:38 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:25.569 ************************************ 00:24:25.569 END TEST dma 00:24:25.569 ************************************ 00:24:25.569 00:35:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.569 00:35:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.569 00:35:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.569 00:35:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.569 00:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.569 ************************************ 00:24:25.569 START TEST nvmf_identify 00:24:25.569 ************************************ 00:24:25.569 00:35:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.569 * Looking for test storage... 00:24:25.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.569 00:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:33.718 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:33.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:33.718 Found net devices under 0000:31:00.0: cvl_0_0 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:33.718 Found net devices under 0000:31:00.1: cvl_0_1 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.718 00:35:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:33.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:24:33.718 00:24:33.718 --- 10.0.0.2 ping statistics --- 00:24:33.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.718 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:24:33.718 00:24:33.718 --- 10.0.0.1 ping statistics --- 00:24:33.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.718 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.718 00:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1192384 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1192384 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1192384 ']' 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.719 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.979 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.979 00:35:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.979 [2024-07-16 00:35:47.398014] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
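nvmftestinit, traced above, classifies the two Intel E810 ports at 0000:31:00.0/1 (device ID 0x159b, netdevs cvl_0_0 and cvl_0_1) and moves the target-side port into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic actually traverses the NICs; identify.sh then launches nvmf_tgt inside that namespace. Condensed to its TCP/phy path, and assuming the same interface names, the setup amounts to:

# Target port gets its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, load the initiator driver, then start the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &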
00:24:33.979 [2024-07-16 00:35:47.398067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.979 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.979 [2024-07-16 00:35:47.472629] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.979 [2024-07-16 00:35:47.542760] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.979 [2024-07-16 00:35:47.542794] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.979 [2024-07-16 00:35:47.542805] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.979 [2024-07-16 00:35:47.542811] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.979 [2024-07-16 00:35:47.542816] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.979 [2024-07-16 00:35:47.542954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.979 [2024-07-16 00:35:47.543072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.979 [2024-07-16 00:35:47.543236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.979 [2024-07-16 00:35:47.543253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.561 [2024-07-16 00:35:48.177712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.561 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 Malloc0 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 [2024-07-16 00:35:48.277217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.822 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 [ 00:24:34.822 { 00:24:34.823 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:34.823 "subtype": "Discovery", 00:24:34.823 "listen_addresses": [ 00:24:34.823 { 00:24:34.823 "trtype": "TCP", 00:24:34.823 "adrfam": "IPv4", 00:24:34.823 "traddr": "10.0.0.2", 00:24:34.823 "trsvcid": "4420" 00:24:34.823 } 00:24:34.823 ], 00:24:34.823 "allow_any_host": true, 00:24:34.823 "hosts": [] 00:24:34.823 }, 00:24:34.823 { 00:24:34.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.823 "subtype": "NVMe", 00:24:34.823 "listen_addresses": [ 00:24:34.823 { 00:24:34.823 "trtype": "TCP", 00:24:34.823 "adrfam": "IPv4", 00:24:34.823 "traddr": "10.0.0.2", 00:24:34.823 "trsvcid": "4420" 00:24:34.823 } 00:24:34.823 ], 00:24:34.823 "allow_any_host": true, 00:24:34.823 "hosts": [], 00:24:34.823 "serial_number": "SPDK00000000000001", 00:24:34.823 "model_number": "SPDK bdev Controller", 00:24:34.823 "max_namespaces": 32, 00:24:34.823 "min_cntlid": 1, 00:24:34.823 "max_cntlid": 65519, 00:24:34.823 "namespaces": [ 00:24:34.823 { 00:24:34.823 "nsid": 1, 00:24:34.823 "bdev_name": "Malloc0", 00:24:34.823 "name": "Malloc0", 00:24:34.823 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:34.823 "eui64": "ABCDEF0123456789", 00:24:34.823 "uuid": "cade7895-6787-4a8e-920c-ed2bfab0b69f" 00:24:34.823 } 00:24:34.823 ] 00:24:34.823 } 00:24:34.823 ] 00:24:34.823 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.823 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:34.823 [2024-07-16 00:35:48.340112] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
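With the target listening, identify.sh builds the minimal fabric it is about to enumerate: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as namespace 1 with fixed NGUID/EUI64 values, and data plus discovery listeners on 10.0.0.2:4420; nvmf_get_subsystems (JSON above) confirms the layout before spdk_nvme_identify is pointed at the discovery NQN. The same provisioning, expressed directly against the default RPC socket (flag spellings copied from the trace):

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems

# Enumerate through the discovery service, as the test does (verbose logging via -L all).
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all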
00:24:34.823 [2024-07-16 00:35:48.340152] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192732 ] 00:24:34.823 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.823 [2024-07-16 00:35:48.373909] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:34.823 [2024-07-16 00:35:48.373960] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:34.823 [2024-07-16 00:35:48.373965] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:34.823 [2024-07-16 00:35:48.373976] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:34.823 [2024-07-16 00:35:48.373982] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:34.823 [2024-07-16 00:35:48.374690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:34.823 [2024-07-16 00:35:48.374723] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf92ec0 0 00:24:34.823 [2024-07-16 00:35:48.385236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:34.823 [2024-07-16 00:35:48.385252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:34.823 [2024-07-16 00:35:48.385256] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:34.823 [2024-07-16 00:35:48.385260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:34.823 [2024-07-16 00:35:48.385286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.385291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.385295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.385308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:34.823 [2024-07-16 00:35:48.385324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.393240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 [2024-07-16 00:35:48.393250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.393253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.393267] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:34.823 [2024-07-16 00:35:48.393274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:34.823 [2024-07-16 00:35:48.393279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:34.823 [2024-07-16 00:35:48.393298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393302] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.393313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.823 [2024-07-16 00:35:48.393325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.393538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 [2024-07-16 00:35:48.393545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.393549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.393558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:34.823 [2024-07-16 00:35:48.393565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:34.823 [2024-07-16 00:35:48.393571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.393585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.823 [2024-07-16 00:35:48.393595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.393800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 [2024-07-16 00:35:48.393806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.393809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.393818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:34.823 [2024-07-16 00:35:48.393826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.393832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.393839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.393846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.823 [2024-07-16 00:35:48.393856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.394044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 
[2024-07-16 00:35:48.394050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.394054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.394062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.394071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.394087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.823 [2024-07-16 00:35:48.394097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.394292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 [2024-07-16 00:35:48.394298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.394302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.394310] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:34.823 [2024-07-16 00:35:48.394315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.394322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.394427] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:34.823 [2024-07-16 00:35:48.394432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.394439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.394453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.823 [2024-07-16 00:35:48.394463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.823 [2024-07-16 00:35:48.394656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.823 [2024-07-16 00:35:48.394663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.823 [2024-07-16 00:35:48.394667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.823 [2024-07-16 00:35:48.394677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:34.823 [2024-07-16 00:35:48.394686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.823 [2024-07-16 00:35:48.394693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.823 [2024-07-16 00:35:48.394700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.824 [2024-07-16 00:35:48.394709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.824 [2024-07-16 00:35:48.394917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.824 [2024-07-16 00:35:48.394923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.824 [2024-07-16 00:35:48.394927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.394931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.824 [2024-07-16 00:35:48.394935] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:34.824 [2024-07-16 00:35:48.394940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.394950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:34.824 [2024-07-16 00:35:48.394963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.394972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.394976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.394983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.824 [2024-07-16 00:35:48.394994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.824 [2024-07-16 00:35:48.395244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.824 [2024-07-16 00:35:48.395252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.824 [2024-07-16 00:35:48.395256] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395260] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf92ec0): datao=0, datal=4096, cccid=0 00:24:34.824 [2024-07-16 00:35:48.395266] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1015fc0) on tqpair(0xf92ec0): expected_datao=0, payload_size=4096 00:24:34.824 [2024-07-16 00:35:48.395272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:34.824 [2024-07-16 00:35:48.395282] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395286] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.824 [2024-07-16 00:35:48.395443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.824 [2024-07-16 00:35:48.395446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.824 [2024-07-16 00:35:48.395457] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:34.824 [2024-07-16 00:35:48.395462] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:34.824 [2024-07-16 00:35:48.395466] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:34.824 [2024-07-16 00:35:48.395471] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:34.824 [2024-07-16 00:35:48.395475] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:34.824 [2024-07-16 00:35:48.395480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.395487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.395496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.824 [2024-07-16 00:35:48.395522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.824 [2024-07-16 00:35:48.395697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.824 [2024-07-16 00:35:48.395703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.824 [2024-07-16 00:35:48.395708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:34.824 [2024-07-16 00:35:48.395719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.824 [2024-07-16 00:35:48.395739] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.824 [2024-07-16 00:35:48.395757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.824 [2024-07-16 00:35:48.395776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.824 [2024-07-16 00:35:48.395793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.395803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:34.824 [2024-07-16 00:35:48.395809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.395813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.395819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.824 [2024-07-16 00:35:48.395831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1015fc0, cid 0, qid 0 00:24:34.824 [2024-07-16 00:35:48.395836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016140, cid 1, qid 0 00:24:34.824 [2024-07-16 00:35:48.395840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10162c0, cid 2, qid 0 00:24:34.824 [2024-07-16 00:35:48.395845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:34.824 [2024-07-16 00:35:48.395850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10165c0, cid 4, qid 0 00:24:34.824 [2024-07-16 00:35:48.396088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.824 [2024-07-16 00:35:48.396095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.824 [2024-07-16 00:35:48.396098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.396103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10165c0) on tqpair=0xf92ec0 00:24:34.824 [2024-07-16 00:35:48.396108] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:34.824 [2024-07-16 00:35:48.396115] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:34.824 [2024-07-16 00:35:48.396125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.396130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.396137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.824 [2024-07-16 00:35:48.396146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10165c0, cid 4, qid 0 00:24:34.824 [2024-07-16 00:35:48.396389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.824 [2024-07-16 00:35:48.396396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.824 [2024-07-16 00:35:48.396400] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.396403] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf92ec0): datao=0, datal=4096, cccid=4 00:24:34.824 [2024-07-16 00:35:48.396408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10165c0) on tqpair(0xf92ec0): expected_datao=0, payload_size=4096 00:24:34.824 [2024-07-16 00:35:48.396412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.396419] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.396422] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.824 [2024-07-16 00:35:48.440246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.824 [2024-07-16 00:35:48.440249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10165c0) on tqpair=0xf92ec0 00:24:34.824 [2024-07-16 00:35:48.440266] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:34.824 [2024-07-16 00:35:48.440287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.440298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.824 [2024-07-16 00:35:48.440305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf92ec0) 00:24:34.824 [2024-07-16 00:35:48.440318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.824 [2024-07-16 00:35:48.440332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x10165c0, cid 4, qid 0 00:24:34.824 [2024-07-16 00:35:48.440338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016740, cid 5, qid 0 00:24:34.824 [2024-07-16 00:35:48.440565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.824 [2024-07-16 00:35:48.440572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.824 [2024-07-16 00:35:48.440575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.824 [2024-07-16 00:35:48.440579] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf92ec0): datao=0, datal=1024, cccid=4 00:24:34.824 [2024-07-16 00:35:48.440583] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10165c0) on tqpair(0xf92ec0): expected_datao=0, payload_size=1024 00:24:34.825 [2024-07-16 00:35:48.440588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.825 [2024-07-16 00:35:48.440594] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.825 [2024-07-16 00:35:48.440598] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.825 [2024-07-16 00:35:48.440606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.825 [2024-07-16 00:35:48.440612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.825 [2024-07-16 00:35:48.440616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.825 [2024-07-16 00:35:48.440619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016740) on tqpair=0xf92ec0 00:24:35.090 [2024-07-16 00:35:48.481435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.090 [2024-07-16 00:35:48.481444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.090 [2024-07-16 00:35:48.481448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10165c0) on tqpair=0xf92ec0 00:24:35.090 [2024-07-16 00:35:48.481466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf92ec0) 00:24:35.090 [2024-07-16 00:35:48.481477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.090 [2024-07-16 00:35:48.481493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10165c0, cid 4, qid 0 00:24:35.090 [2024-07-16 00:35:48.481751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.090 [2024-07-16 00:35:48.481758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.090 [2024-07-16 00:35:48.481761] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481765] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf92ec0): datao=0, datal=3072, cccid=4 00:24:35.090 [2024-07-16 00:35:48.481769] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10165c0) on tqpair(0xf92ec0): expected_datao=0, payload_size=3072 00:24:35.090 [2024-07-16 00:35:48.481774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481784] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.090 [2024-07-16 00:35:48.481954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.090 [2024-07-16 00:35:48.481957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10165c0) on tqpair=0xf92ec0 00:24:35.090 [2024-07-16 00:35:48.481969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.481973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf92ec0) 00:24:35.090 [2024-07-16 00:35:48.481979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.090 [2024-07-16 00:35:48.481992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10165c0, cid 4, qid 0 00:24:35.090 [2024-07-16 00:35:48.482182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.090 [2024-07-16 00:35:48.482188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.090 [2024-07-16 00:35:48.482192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.482195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf92ec0): datao=0, datal=8, cccid=4 00:24:35.090 [2024-07-16 00:35:48.482199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10165c0) on tqpair(0xf92ec0): expected_datao=0, payload_size=8 00:24:35.090 [2024-07-16 00:35:48.482204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.482210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.482214] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.522395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.090 [2024-07-16 00:35:48.522406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.090 [2024-07-16 00:35:48.522412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.090 [2024-07-16 00:35:48.522416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10165c0) on tqpair=0xf92ec0 00:24:35.090 ===================================================== 00:24:35.090 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:35.090 ===================================================== 00:24:35.090 Controller Capabilities/Features 00:24:35.090 ================================ 00:24:35.090 Vendor ID: 0000 00:24:35.090 Subsystem Vendor ID: 0000 00:24:35.090 Serial Number: .................... 00:24:35.090 Model Number: ........................................ 
00:24:35.090 Firmware Version: 24.09 00:24:35.090 Recommended Arb Burst: 0 00:24:35.090 IEEE OUI Identifier: 00 00 00 00:24:35.090 Multi-path I/O 00:24:35.090 May have multiple subsystem ports: No 00:24:35.090 May have multiple controllers: No 00:24:35.090 Associated with SR-IOV VF: No 00:24:35.090 Max Data Transfer Size: 131072 00:24:35.090 Max Number of Namespaces: 0 00:24:35.090 Max Number of I/O Queues: 1024 00:24:35.090 NVMe Specification Version (VS): 1.3 00:24:35.090 NVMe Specification Version (Identify): 1.3 00:24:35.090 Maximum Queue Entries: 128 00:24:35.090 Contiguous Queues Required: Yes 00:24:35.090 Arbitration Mechanisms Supported 00:24:35.090 Weighted Round Robin: Not Supported 00:24:35.090 Vendor Specific: Not Supported 00:24:35.090 Reset Timeout: 15000 ms 00:24:35.090 Doorbell Stride: 4 bytes 00:24:35.090 NVM Subsystem Reset: Not Supported 00:24:35.090 Command Sets Supported 00:24:35.090 NVM Command Set: Supported 00:24:35.090 Boot Partition: Not Supported 00:24:35.090 Memory Page Size Minimum: 4096 bytes 00:24:35.090 Memory Page Size Maximum: 4096 bytes 00:24:35.090 Persistent Memory Region: Not Supported 00:24:35.090 Optional Asynchronous Events Supported 00:24:35.090 Namespace Attribute Notices: Not Supported 00:24:35.090 Firmware Activation Notices: Not Supported 00:24:35.090 ANA Change Notices: Not Supported 00:24:35.090 PLE Aggregate Log Change Notices: Not Supported 00:24:35.090 LBA Status Info Alert Notices: Not Supported 00:24:35.090 EGE Aggregate Log Change Notices: Not Supported 00:24:35.090 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.091 Zone Descriptor Change Notices: Not Supported 00:24:35.091 Discovery Log Change Notices: Supported 00:24:35.091 Controller Attributes 00:24:35.091 128-bit Host Identifier: Not Supported 00:24:35.091 Non-Operational Permissive Mode: Not Supported 00:24:35.091 NVM Sets: Not Supported 00:24:35.091 Read Recovery Levels: Not Supported 00:24:35.091 Endurance Groups: Not Supported 00:24:35.091 Predictable Latency Mode: Not Supported 00:24:35.091 Traffic Based Keep ALive: Not Supported 00:24:35.091 Namespace Granularity: Not Supported 00:24:35.091 SQ Associations: Not Supported 00:24:35.091 UUID List: Not Supported 00:24:35.091 Multi-Domain Subsystem: Not Supported 00:24:35.091 Fixed Capacity Management: Not Supported 00:24:35.091 Variable Capacity Management: Not Supported 00:24:35.091 Delete Endurance Group: Not Supported 00:24:35.091 Delete NVM Set: Not Supported 00:24:35.091 Extended LBA Formats Supported: Not Supported 00:24:35.091 Flexible Data Placement Supported: Not Supported 00:24:35.091 00:24:35.091 Controller Memory Buffer Support 00:24:35.091 ================================ 00:24:35.091 Supported: No 00:24:35.091 00:24:35.091 Persistent Memory Region Support 00:24:35.091 ================================ 00:24:35.091 Supported: No 00:24:35.091 00:24:35.091 Admin Command Set Attributes 00:24:35.091 ============================ 00:24:35.091 Security Send/Receive: Not Supported 00:24:35.091 Format NVM: Not Supported 00:24:35.091 Firmware Activate/Download: Not Supported 00:24:35.091 Namespace Management: Not Supported 00:24:35.091 Device Self-Test: Not Supported 00:24:35.091 Directives: Not Supported 00:24:35.091 NVMe-MI: Not Supported 00:24:35.091 Virtualization Management: Not Supported 00:24:35.091 Doorbell Buffer Config: Not Supported 00:24:35.091 Get LBA Status Capability: Not Supported 00:24:35.091 Command & Feature Lockdown Capability: Not Supported 00:24:35.091 Abort Command Limit: 1 00:24:35.091 Async 
Event Request Limit: 4 00:24:35.091 Number of Firmware Slots: N/A 00:24:35.091 Firmware Slot 1 Read-Only: N/A 00:24:35.091 Firmware Activation Without Reset: N/A 00:24:35.091 Multiple Update Detection Support: N/A 00:24:35.091 Firmware Update Granularity: No Information Provided 00:24:35.091 Per-Namespace SMART Log: No 00:24:35.091 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.091 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:35.091 Command Effects Log Page: Not Supported 00:24:35.091 Get Log Page Extended Data: Supported 00:24:35.091 Telemetry Log Pages: Not Supported 00:24:35.091 Persistent Event Log Pages: Not Supported 00:24:35.091 Supported Log Pages Log Page: May Support 00:24:35.091 Commands Supported & Effects Log Page: Not Supported 00:24:35.091 Feature Identifiers & Effects Log Page:May Support 00:24:35.091 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.091 Data Area 4 for Telemetry Log: Not Supported 00:24:35.091 Error Log Page Entries Supported: 128 00:24:35.091 Keep Alive: Not Supported 00:24:35.091 00:24:35.091 NVM Command Set Attributes 00:24:35.091 ========================== 00:24:35.091 Submission Queue Entry Size 00:24:35.091 Max: 1 00:24:35.091 Min: 1 00:24:35.091 Completion Queue Entry Size 00:24:35.091 Max: 1 00:24:35.091 Min: 1 00:24:35.091 Number of Namespaces: 0 00:24:35.091 Compare Command: Not Supported 00:24:35.091 Write Uncorrectable Command: Not Supported 00:24:35.091 Dataset Management Command: Not Supported 00:24:35.091 Write Zeroes Command: Not Supported 00:24:35.091 Set Features Save Field: Not Supported 00:24:35.091 Reservations: Not Supported 00:24:35.091 Timestamp: Not Supported 00:24:35.091 Copy: Not Supported 00:24:35.091 Volatile Write Cache: Not Present 00:24:35.091 Atomic Write Unit (Normal): 1 00:24:35.091 Atomic Write Unit (PFail): 1 00:24:35.091 Atomic Compare & Write Unit: 1 00:24:35.091 Fused Compare & Write: Supported 00:24:35.091 Scatter-Gather List 00:24:35.091 SGL Command Set: Supported 00:24:35.091 SGL Keyed: Supported 00:24:35.091 SGL Bit Bucket Descriptor: Not Supported 00:24:35.091 SGL Metadata Pointer: Not Supported 00:24:35.091 Oversized SGL: Not Supported 00:24:35.091 SGL Metadata Address: Not Supported 00:24:35.091 SGL Offset: Supported 00:24:35.091 Transport SGL Data Block: Not Supported 00:24:35.091 Replay Protected Memory Block: Not Supported 00:24:35.091 00:24:35.091 Firmware Slot Information 00:24:35.091 ========================= 00:24:35.091 Active slot: 0 00:24:35.091 00:24:35.091 00:24:35.091 Error Log 00:24:35.091 ========= 00:24:35.091 00:24:35.091 Active Namespaces 00:24:35.091 ================= 00:24:35.091 Discovery Log Page 00:24:35.091 ================== 00:24:35.091 Generation Counter: 2 00:24:35.091 Number of Records: 2 00:24:35.091 Record Format: 0 00:24:35.091 00:24:35.091 Discovery Log Entry 0 00:24:35.091 ---------------------- 00:24:35.091 Transport Type: 3 (TCP) 00:24:35.091 Address Family: 1 (IPv4) 00:24:35.091 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:35.091 Entry Flags: 00:24:35.091 Duplicate Returned Information: 1 00:24:35.091 Explicit Persistent Connection Support for Discovery: 1 00:24:35.091 Transport Requirements: 00:24:35.091 Secure Channel: Not Required 00:24:35.091 Port ID: 0 (0x0000) 00:24:35.091 Controller ID: 65535 (0xffff) 00:24:35.091 Admin Max SQ Size: 128 00:24:35.091 Transport Service Identifier: 4420 00:24:35.091 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:35.091 Transport Address: 10.0.0.2 00:24:35.091 
Discovery Log Entry 1 00:24:35.091 ---------------------- 00:24:35.091 Transport Type: 3 (TCP) 00:24:35.091 Address Family: 1 (IPv4) 00:24:35.091 Subsystem Type: 2 (NVM Subsystem) 00:24:35.091 Entry Flags: 00:24:35.091 Duplicate Returned Information: 0 00:24:35.091 Explicit Persistent Connection Support for Discovery: 0 00:24:35.091 Transport Requirements: 00:24:35.091 Secure Channel: Not Required 00:24:35.091 Port ID: 0 (0x0000) 00:24:35.091 Controller ID: 65535 (0xffff) 00:24:35.091 Admin Max SQ Size: 128 00:24:35.091 Transport Service Identifier: 4420 00:24:35.091 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:35.091 Transport Address: 10.0.0.2 [2024-07-16 00:35:48.522503] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:35.091 [2024-07-16 00:35:48.522514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1015fc0) on tqpair=0xf92ec0 00:24:35.091 [2024-07-16 00:35:48.522520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-07-16 00:35:48.522526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016140) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.522530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-07-16 00:35:48.522535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10162c0) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.522540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-07-16 00:35:48.522545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.522549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-07-16 00:35:48.522558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.522573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.522586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.522671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.522677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.522681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.522692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.522706] 
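Editor's note (not part of the captured output): the Discovery Log Entry 0/1 dump above is what the identify tool reads back from the discovery controller at 10.0.0.2:4420. A minimal, hedged sketch of enumerating the same entries through SPDK's probe API follows; it is not the test's own code, and the env option values, callback bodies, and error handling are illustrative assumptions.

/* Sketch: enumerate NVMe-oF subsystems behind the discovery controller seen
 * in the log. Address/port/NQN are copied from the output above; everything
 * else is an assumption for illustration. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("discovered %s at %s:%s\n", trid->subnqn, trid->traddr, trid->trsvcid);
	return false;	/* enumerate only; do not attach to the subsystems */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* never reached while probe_cb returns false */
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";	/* assumed app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Point the probe at the discovery service from the log. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	/* spdk_nvme_probe() reads the discovery log page and calls probe_cb
	 * once per entry, mirroring the two entries printed above. */
	return spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}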
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.522718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.522903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.522909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.522913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.522921] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:35.092 [2024-07-16 00:35:48.522926] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:35.092 [2024-07-16 00:35:48.522935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.522942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.522951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.522961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.523173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.523179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.523183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.523197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.523211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.523220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.523424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.523431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.523434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.523447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523454] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.523461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.523471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.523656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.523662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.523665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.523678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.523692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.523702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.523886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.523892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.523895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.523908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.523916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.523922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.523934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.524157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.524163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.524166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.524170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.524179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.524183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.524186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf92ec0) 00:24:35.092 [2024-07-16 00:35:48.524193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-07-16 00:35:48.524203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1016440, cid 3, qid 0 00:24:35.092 [2024-07-16 00:35:48.528237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.092 [2024-07-16 00:35:48.528244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.092 [2024-07-16 00:35:48.528248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.092 [2024-07-16 00:35:48.528252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1016440) on tqpair=0xf92ec0 00:24:35.092 [2024-07-16 00:35:48.528259] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:35.092 00:24:35.092 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:35.092 [2024-07-16 00:35:48.566561] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:35.093 [2024-07-16 00:35:48.566606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192740 ] 00:24:35.093 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.093 [2024-07-16 00:35:48.598784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:35.093 [2024-07-16 00:35:48.598822] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:35.093 [2024-07-16 00:35:48.598828] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:35.093 [2024-07-16 00:35:48.598839] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:35.093 [2024-07-16 00:35:48.598845] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:35.093 [2024-07-16 00:35:48.602433] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:35.093 [2024-07-16 00:35:48.602462] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d32ec0 0 00:24:35.093 [2024-07-16 00:35:48.610236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:35.093 [2024-07-16 00:35:48.610248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:35.093 [2024-07-16 00:35:48.610253] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:35.093 [2024-07-16 00:35:48.610256] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:35.093 [2024-07-16 00:35:48.610280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.610290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.610294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.610306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:35.093 [2024-07-16 00:35:48.610321] nvme_tcp.c: 
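Editor's note (not part of the captured output): the run above is started by the spdk_nvme_identify binary with a '-r' transport-ID string. A minimal sketch of connecting to the same subsystem through SPDK's public API and reading the identify data the dump is printed from is below; this is not the tool's implementation, and the env options and error handling are assumptions.

/* Sketch only: connect to nqn.2016-06.io.spdk:cnode1 from the log and read
 * the cached identify-controller data. Option values are illustrative. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* assumed app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same key:value transport-ID format the '-r' option takes. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* CNTLID 0x0001 and the MDTS-derived 131072-byte max transfer size in
	 * the debug lines above come from this identify data. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid 0x%04x, mdts %u, max xfer %u bytes\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}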
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.618240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.618249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.618252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.093 [2024-07-16 00:35:48.618268] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:35.093 [2024-07-16 00:35:48.618274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:35.093 [2024-07-16 00:35:48.618279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:35.093 [2024-07-16 00:35:48.618291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.618306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.618320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.618596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.618603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.618606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.093 [2024-07-16 00:35:48.618615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:35.093 [2024-07-16 00:35:48.618623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:35.093 [2024-07-16 00:35:48.618629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.618643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.618654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.618892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.618899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.618902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 
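Editor's note (not part of the captured output): the "read vs" / "read cap" states above are the driver fetching the Version and Capabilities properties over the fabric during init. Once a controller is attached, the cached register values can be read back; the helper below is a hedged sketch under that assumption, not code from this test.

/* Sketch: dump the registers that the 'read vs' / 'read cap' init states
 * above fetch. Assumes an already-attached controller handle. */
#include "spdk/nvme.h"
#include <stdio.h>

void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* The identify dump above reports NVMe spec version 1.3 for this target;
	 * CAP.TO is in 500 ms units. */
	printf("VS %u.%u, CAP.MQES %u, CAP.TO %u, CSTS.RDY %u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes, (unsigned)cap.bits.to,
	       (unsigned)csts.bits.rdy);
}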
00:24:35.093 [2024-07-16 00:35:48.618911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:35.093 [2024-07-16 00:35:48.618918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.618925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.618934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.618941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.618951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.619043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.619050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.619053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.093 [2024-07-16 00:35:48.619062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.619071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.619085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.619095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.619160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.619167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.619170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.093 [2024-07-16 00:35:48.619178] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:35.093 [2024-07-16 00:35:48.619183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.619190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.619295] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:35.093 [2024-07-16 00:35:48.619300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.619307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.619321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.619331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.619613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.619619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.619622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.093 [2024-07-16 00:35:48.619631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:35.093 [2024-07-16 00:35:48.619640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.093 [2024-07-16 00:35:48.619649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.093 [2024-07-16 00:35:48.619656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-07-16 00:35:48.619666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.093 [2024-07-16 00:35:48.619863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.093 [2024-07-16 00:35:48.619869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.093 [2024-07-16 00:35:48.619872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.619876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.094 [2024-07-16 00:35:48.619881] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:35.094 [2024-07-16 00:35:48.619885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.619893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:35.094 [2024-07-16 00:35:48.619900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.619909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.619912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.619919] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-07-16 00:35:48.619929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.094 [2024-07-16 00:35:48.620166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.094 [2024-07-16 00:35:48.620172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.094 [2024-07-16 00:35:48.620176] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.620180] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=4096, cccid=0 00:24:35.094 [2024-07-16 00:35:48.620184] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db5fc0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=4096 00:24:35.094 [2024-07-16 00:35:48.620189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.620218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.620222] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.094 [2024-07-16 00:35:48.661345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.094 [2024-07-16 00:35:48.661348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.094 [2024-07-16 00:35:48.661360] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:35.094 [2024-07-16 00:35:48.661365] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:35.094 [2024-07-16 00:35:48.661369] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:35.094 [2024-07-16 00:35:48.661373] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:35.094 [2024-07-16 00:35:48.661377] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:35.094 [2024-07-16 00:35:48.661382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.661393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.661402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:35.094 [2024-07-16 00:35:48.661430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.094 [2024-07-16 00:35:48.661634] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.094 [2024-07-16 00:35:48.661641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.094 [2024-07-16 00:35:48.661644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.094 [2024-07-16 00:35:48.661654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.094 [2024-07-16 00:35:48.661674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.094 [2024-07-16 00:35:48.661693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.094 [2024-07-16 00:35:48.661712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.094 [2024-07-16 00:35:48.661729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.661740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.661746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.661750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.661756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-07-16 00:35:48.661768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db5fc0, cid 0, qid 0 00:24:35.094 [2024-07-16 00:35:48.661773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1db6140, cid 1, qid 0 00:24:35.094 [2024-07-16 00:35:48.661780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db62c0, cid 2, qid 0 00:24:35.094 [2024-07-16 00:35:48.661784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6440, cid 3, qid 0 00:24:35.094 [2024-07-16 00:35:48.661789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.094 [2024-07-16 00:35:48.662018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.094 [2024-07-16 00:35:48.662024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.094 [2024-07-16 00:35:48.662027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.662031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.094 [2024-07-16 00:35:48.662036] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:35.094 [2024-07-16 00:35:48.662040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.662050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.662056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.662062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.662066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.662070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.094 [2024-07-16 00:35:48.662076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:35.094 [2024-07-16 00:35:48.662086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.094 [2024-07-16 00:35:48.666236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.094 [2024-07-16 00:35:48.666243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.094 [2024-07-16 00:35:48.666247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.094 [2024-07-16 00:35:48.666251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.094 [2024-07-16 00:35:48.666317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.666327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:35.094 [2024-07-16 00:35:48.666334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.095 [2024-07-16 00:35:48.666344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.095 [2024-07-16 00:35:48.666356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.095 [2024-07-16 00:35:48.666540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.095 [2024-07-16 00:35:48.666547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.095 [2024-07-16 00:35:48.666550] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666554] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=4096, cccid=4 00:24:35.095 [2024-07-16 00:35:48.666558] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db65c0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=4096 00:24:35.095 [2024-07-16 00:35:48.666562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666571] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666575] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.095 [2024-07-16 00:35:48.666756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.095 [2024-07-16 00:35:48.666759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.095 [2024-07-16 00:35:48.666772] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:35.095 [2024-07-16 00:35:48.666786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.666794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.666801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.095 [2024-07-16 00:35:48.666811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-07-16 00:35:48.666822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.095 [2024-07-16 00:35:48.666903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.095 [2024-07-16 00:35:48.666910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.095 [2024-07-16 00:35:48.666913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=4096, cccid=4 00:24:35.095 [2024-07-16 00:35:48.666921] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db65c0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=4096 00:24:35.095 [2024-07-16 00:35:48.666925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666932] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.666935] 
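Editor's note (not part of the captured output): the "Namespace 1 was added" message above comes from the identify-active-ns pass. A short sketch of walking the active namespaces on an attached controller follows; the 'ctrlr' handle is assumed to come from elsewhere, and this is not the tool's code.

/* Sketch: iterate the active namespaces that the identify-active-ns pass
 * above discovered. Assumes 'ctrlr' was attached elsewhere. */
#include "spdk/nvme.h"
#include <stdio.h>

void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %u: %lu blocks of %u bytes\n", nsid,
		       (unsigned long)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}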
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.095 [2024-07-16 00:35:48.667110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.095 [2024-07-16 00:35:48.667113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.095 [2024-07-16 00:35:48.667129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.095 [2024-07-16 00:35:48.667155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-07-16 00:35:48.667165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.095 [2024-07-16 00:35:48.667245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.095 [2024-07-16 00:35:48.667251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.095 [2024-07-16 00:35:48.667255] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667258] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=4096, cccid=4 00:24:35.095 [2024-07-16 00:35:48.667265] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db65c0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=4096 00:24:35.095 [2024-07-16 00:35:48.667269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667275] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667279] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.095 [2024-07-16 00:35:48.667411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.095 [2024-07-16 00:35:48.667414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.095 [2024-07-16 00:35:48.667418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.095 [2024-07-16 00:35:48.667425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:35.095 [2024-07-16 00:35:48.667461] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:35.095 [2024-07-16 00:35:48.667465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:35.096 [2024-07-16 00:35:48.667470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:35.096 [2024-07-16 00:35:48.667484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.667494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.667501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.667514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.096 [2024-07-16 00:35:48.667527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.096 [2024-07-16 00:35:48.667532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6740, cid 5, qid 0 00:24:35.096 [2024-07-16 00:35:48.667712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.096 [2024-07-16 00:35:48.667718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.096 [2024-07-16 00:35:48.667722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.096 [2024-07-16 00:35:48.667732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.096 [2024-07-16 00:35:48.667738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.096 [2024-07-16 00:35:48.667743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6740) on tqpair=0x1d32ec0 00:24:35.096 [2024-07-16 00:35:48.667756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.667760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.667766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 
[2024-07-16 00:35:48.667776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6740, cid 5, qid 0 00:24:35.096 [2024-07-16 00:35:48.668015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.096 [2024-07-16 00:35:48.668022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.096 [2024-07-16 00:35:48.668025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6740) on tqpair=0x1d32ec0 00:24:35.096 [2024-07-16 00:35:48.668038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6740, cid 5, qid 0 00:24:35.096 [2024-07-16 00:35:48.668247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.096 [2024-07-16 00:35:48.668253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.096 [2024-07-16 00:35:48.668257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6740) on tqpair=0x1d32ec0 00:24:35.096 [2024-07-16 00:35:48.668269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6740, cid 5, qid 0 00:24:35.096 [2024-07-16 00:35:48.668519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.096 [2024-07-16 00:35:48.668525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.096 [2024-07-16 00:35:48.668528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6740) on tqpair=0x1d32ec0 00:24:35.096 [2024-07-16 00:35:48.668545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d32ec0) 00:24:35.096 [2024-07-16 00:35:48.668608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-07-16 00:35:48.668619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6740, cid 5, qid 0 00:24:35.096 [2024-07-16 00:35:48.668625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db65c0, cid 4, qid 0 00:24:35.096 [2024-07-16 00:35:48.668629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db68c0, cid 6, qid 0 00:24:35.096 [2024-07-16 00:35:48.668634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6a40, cid 7, qid 0 00:24:35.096 [2024-07-16 00:35:48.668928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.096 [2024-07-16 00:35:48.668934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.096 [2024-07-16 00:35:48.668937] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.668941] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=8192, cccid=5 00:24:35.096 [2024-07-16 00:35:48.668945] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db6740) on tqpair(0x1d32ec0): expected_datao=0, payload_size=8192 00:24:35.096 [2024-07-16 00:35:48.668949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669009] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669013] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.096 [2024-07-16 00:35:48.669025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.096 [2024-07-16 00:35:48.669028] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669031] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=512, cccid=4 00:24:35.096 [2024-07-16 00:35:48.669036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db65c0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=512 00:24:35.096 [2024-07-16 00:35:48.669040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669046] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669050] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.096 [2024-07-16 00:35:48.669061] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.096 [2024-07-16 00:35:48.669064] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669068] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=512, cccid=6 00:24:35.096 [2024-07-16 00:35:48.669072] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db68c0) on tqpair(0x1d32ec0): expected_datao=0, payload_size=512 00:24:35.096 [2024-07-16 00:35:48.669076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669082] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669086] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.096 [2024-07-16 00:35:48.669097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.096 [2024-07-16 00:35:48.669100] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.096 [2024-07-16 00:35:48.669104] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d32ec0): datao=0, datal=4096, cccid=7 00:24:35.096 [2024-07-16 00:35:48.669108] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1db6a40) on tqpair(0x1d32ec0): expected_datao=0, payload_size=4096 00:24:35.096 [2024-07-16 00:35:48.669114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669121] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.097 [2024-07-16 00:35:48.669154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.097 [2024-07-16 00:35:48.669158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6740) on tqpair=0x1d32ec0 00:24:35.097 [2024-07-16 00:35:48.669173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.097 [2024-07-16 00:35:48.669180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.097 [2024-07-16 00:35:48.669183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db65c0) on tqpair=0x1d32ec0 00:24:35.097 [2024-07-16 00:35:48.669197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.097 [2024-07-16 00:35:48.669202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.097 [2024-07-16 00:35:48.669206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db68c0) on tqpair=0x1d32ec0 00:24:35.097 [2024-07-16 00:35:48.669216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.097 [2024-07-16 00:35:48.669222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.097 [2024-07-16 00:35:48.669225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.097 [2024-07-16 00:35:48.669233] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6a40) on tqpair=0x1d32ec0 00:24:35.097 ===================================================== 00:24:35.097 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.097 ===================================================== 00:24:35.097 Controller Capabilities/Features 00:24:35.097 ================================ 00:24:35.097 Vendor ID: 8086 00:24:35.097 Subsystem Vendor ID: 8086 00:24:35.097 Serial Number: SPDK00000000000001 00:24:35.097 Model Number: SPDK bdev Controller 00:24:35.097 Firmware Version: 24.09 00:24:35.097 Recommended Arb Burst: 6 00:24:35.097 IEEE OUI Identifier: e4 d2 5c 00:24:35.097 Multi-path I/O 00:24:35.097 May have multiple subsystem ports: Yes 00:24:35.097 May have multiple controllers: Yes 00:24:35.097 Associated with SR-IOV VF: No 00:24:35.097 Max Data Transfer Size: 131072 00:24:35.097 Max Number of Namespaces: 32 00:24:35.097 Max Number of I/O Queues: 127 00:24:35.097 NVMe Specification Version (VS): 1.3 00:24:35.097 NVMe Specification Version (Identify): 1.3 00:24:35.097 Maximum Queue Entries: 128 00:24:35.097 Contiguous Queues Required: Yes 00:24:35.097 Arbitration Mechanisms Supported 00:24:35.097 Weighted Round Robin: Not Supported 00:24:35.097 Vendor Specific: Not Supported 00:24:35.097 Reset Timeout: 15000 ms 00:24:35.097 Doorbell Stride: 4 bytes 00:24:35.097 NVM Subsystem Reset: Not Supported 00:24:35.097 Command Sets Supported 00:24:35.097 NVM Command Set: Supported 00:24:35.097 Boot Partition: Not Supported 00:24:35.097 Memory Page Size Minimum: 4096 bytes 00:24:35.097 Memory Page Size Maximum: 4096 bytes 00:24:35.097 Persistent Memory Region: Not Supported 00:24:35.097 Optional Asynchronous Events Supported 00:24:35.097 Namespace Attribute Notices: Supported 00:24:35.097 Firmware Activation Notices: Not Supported 00:24:35.097 ANA Change Notices: Not Supported 00:24:35.097 PLE Aggregate Log Change Notices: Not Supported 00:24:35.097 LBA Status Info Alert Notices: Not Supported 00:24:35.097 EGE Aggregate Log Change Notices: Not Supported 00:24:35.097 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.097 Zone Descriptor Change Notices: Not Supported 00:24:35.097 Discovery Log Change Notices: Not Supported 00:24:35.097 Controller Attributes 00:24:35.097 128-bit Host Identifier: Supported 00:24:35.097 Non-Operational Permissive Mode: Not Supported 00:24:35.097 NVM Sets: Not Supported 00:24:35.097 Read Recovery Levels: Not Supported 00:24:35.097 Endurance Groups: Not Supported 00:24:35.097 Predictable Latency Mode: Not Supported 00:24:35.097 Traffic Based Keep ALive: Not Supported 00:24:35.097 Namespace Granularity: Not Supported 00:24:35.097 SQ Associations: Not Supported 00:24:35.097 UUID List: Not Supported 00:24:35.097 Multi-Domain Subsystem: Not Supported 00:24:35.097 Fixed Capacity Management: Not Supported 00:24:35.097 Variable Capacity Management: Not Supported 00:24:35.097 Delete Endurance Group: Not Supported 00:24:35.097 Delete NVM Set: Not Supported 00:24:35.097 Extended LBA Formats Supported: Not Supported 00:24:35.097 Flexible Data Placement Supported: Not Supported 00:24:35.097 00:24:35.097 Controller Memory Buffer Support 00:24:35.097 ================================ 00:24:35.097 Supported: No 00:24:35.097 00:24:35.097 Persistent Memory Region Support 00:24:35.097 ================================ 00:24:35.097 Supported: No 00:24:35.097 00:24:35.097 Admin Command Set Attributes 00:24:35.097 ============================ 00:24:35.097 Security 
Send/Receive: Not Supported 00:24:35.097 Format NVM: Not Supported 00:24:35.097 Firmware Activate/Download: Not Supported 00:24:35.097 Namespace Management: Not Supported 00:24:35.097 Device Self-Test: Not Supported 00:24:35.097 Directives: Not Supported 00:24:35.097 NVMe-MI: Not Supported 00:24:35.097 Virtualization Management: Not Supported 00:24:35.097 Doorbell Buffer Config: Not Supported 00:24:35.097 Get LBA Status Capability: Not Supported 00:24:35.097 Command & Feature Lockdown Capability: Not Supported 00:24:35.097 Abort Command Limit: 4 00:24:35.097 Async Event Request Limit: 4 00:24:35.097 Number of Firmware Slots: N/A 00:24:35.097 Firmware Slot 1 Read-Only: N/A 00:24:35.097 Firmware Activation Without Reset: N/A 00:24:35.097 Multiple Update Detection Support: N/A 00:24:35.097 Firmware Update Granularity: No Information Provided 00:24:35.097 Per-Namespace SMART Log: No 00:24:35.097 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.097 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:35.097 Command Effects Log Page: Supported 00:24:35.097 Get Log Page Extended Data: Supported 00:24:35.097 Telemetry Log Pages: Not Supported 00:24:35.097 Persistent Event Log Pages: Not Supported 00:24:35.097 Supported Log Pages Log Page: May Support 00:24:35.097 Commands Supported & Effects Log Page: Not Supported 00:24:35.097 Feature Identifiers & Effects Log Page:May Support 00:24:35.097 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.097 Data Area 4 for Telemetry Log: Not Supported 00:24:35.097 Error Log Page Entries Supported: 128 00:24:35.097 Keep Alive: Supported 00:24:35.097 Keep Alive Granularity: 10000 ms 00:24:35.097 00:24:35.097 NVM Command Set Attributes 00:24:35.097 ========================== 00:24:35.097 Submission Queue Entry Size 00:24:35.097 Max: 64 00:24:35.097 Min: 64 00:24:35.097 Completion Queue Entry Size 00:24:35.097 Max: 16 00:24:35.097 Min: 16 00:24:35.097 Number of Namespaces: 32 00:24:35.097 Compare Command: Supported 00:24:35.097 Write Uncorrectable Command: Not Supported 00:24:35.097 Dataset Management Command: Supported 00:24:35.097 Write Zeroes Command: Supported 00:24:35.097 Set Features Save Field: Not Supported 00:24:35.097 Reservations: Supported 00:24:35.098 Timestamp: Not Supported 00:24:35.098 Copy: Supported 00:24:35.098 Volatile Write Cache: Present 00:24:35.098 Atomic Write Unit (Normal): 1 00:24:35.098 Atomic Write Unit (PFail): 1 00:24:35.098 Atomic Compare & Write Unit: 1 00:24:35.098 Fused Compare & Write: Supported 00:24:35.098 Scatter-Gather List 00:24:35.098 SGL Command Set: Supported 00:24:35.098 SGL Keyed: Supported 00:24:35.098 SGL Bit Bucket Descriptor: Not Supported 00:24:35.098 SGL Metadata Pointer: Not Supported 00:24:35.098 Oversized SGL: Not Supported 00:24:35.098 SGL Metadata Address: Not Supported 00:24:35.098 SGL Offset: Supported 00:24:35.098 Transport SGL Data Block: Not Supported 00:24:35.098 Replay Protected Memory Block: Not Supported 00:24:35.098 00:24:35.098 Firmware Slot Information 00:24:35.098 ========================= 00:24:35.098 Active slot: 1 00:24:35.098 Slot 1 Firmware Revision: 24.09 00:24:35.098 00:24:35.098 00:24:35.098 Commands Supported and Effects 00:24:35.098 ============================== 00:24:35.098 Admin Commands 00:24:35.098 -------------- 00:24:35.098 Get Log Page (02h): Supported 00:24:35.098 Identify (06h): Supported 00:24:35.098 Abort (08h): Supported 00:24:35.098 Set Features (09h): Supported 00:24:35.098 Get Features (0Ah): Supported 00:24:35.098 Asynchronous Event Request (0Ch): 
Supported 00:24:35.098 Keep Alive (18h): Supported 00:24:35.098 I/O Commands 00:24:35.098 ------------ 00:24:35.098 Flush (00h): Supported LBA-Change 00:24:35.098 Write (01h): Supported LBA-Change 00:24:35.098 Read (02h): Supported 00:24:35.098 Compare (05h): Supported 00:24:35.098 Write Zeroes (08h): Supported LBA-Change 00:24:35.098 Dataset Management (09h): Supported LBA-Change 00:24:35.098 Copy (19h): Supported LBA-Change 00:24:35.098 00:24:35.098 Error Log 00:24:35.098 ========= 00:24:35.098 00:24:35.098 Arbitration 00:24:35.098 =========== 00:24:35.098 Arbitration Burst: 1 00:24:35.098 00:24:35.098 Power Management 00:24:35.098 ================ 00:24:35.098 Number of Power States: 1 00:24:35.098 Current Power State: Power State #0 00:24:35.098 Power State #0: 00:24:35.098 Max Power: 0.00 W 00:24:35.098 Non-Operational State: Operational 00:24:35.098 Entry Latency: Not Reported 00:24:35.098 Exit Latency: Not Reported 00:24:35.098 Relative Read Throughput: 0 00:24:35.098 Relative Read Latency: 0 00:24:35.098 Relative Write Throughput: 0 00:24:35.098 Relative Write Latency: 0 00:24:35.098 Idle Power: Not Reported 00:24:35.098 Active Power: Not Reported 00:24:35.098 Non-Operational Permissive Mode: Not Supported 00:24:35.098 00:24:35.098 Health Information 00:24:35.098 ================== 00:24:35.098 Critical Warnings: 00:24:35.098 Available Spare Space: OK 00:24:35.098 Temperature: OK 00:24:35.098 Device Reliability: OK 00:24:35.098 Read Only: No 00:24:35.098 Volatile Memory Backup: OK 00:24:35.098 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:35.098 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:35.098 Available Spare: 0% 00:24:35.098 Available Spare Threshold: 0% 00:24:35.098 Life Percentage Used:[2024-07-16 00:35:48.669330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d32ec0) 00:24:35.098 [2024-07-16 00:35:48.669342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-07-16 00:35:48.669353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6a40, cid 7, qid 0 00:24:35.098 [2024-07-16 00:35:48.669486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.098 [2024-07-16 00:35:48.669492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.098 [2024-07-16 00:35:48.669495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6a40) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669528] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:35.098 [2024-07-16 00:35:48.669537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db5fc0) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-07-16 00:35:48.669548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6140) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-07-16 
00:35:48.669558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db62c0) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-07-16 00:35:48.669567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6440) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-07-16 00:35:48.669581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d32ec0) 00:24:35.098 [2024-07-16 00:35:48.669596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-07-16 00:35:48.669607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6440, cid 3, qid 0 00:24:35.098 [2024-07-16 00:35:48.669786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.098 [2024-07-16 00:35:48.669792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.098 [2024-07-16 00:35:48.669796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6440) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.669806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.669813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d32ec0) 00:24:35.098 [2024-07-16 00:35:48.669820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-07-16 00:35:48.669832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6440, cid 3, qid 0 00:24:35.098 [2024-07-16 00:35:48.670089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.098 [2024-07-16 00:35:48.670095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.098 [2024-07-16 00:35:48.670099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.670102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6440) on tqpair=0x1d32ec0 00:24:35.098 [2024-07-16 00:35:48.670107] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:35.098 [2024-07-16 00:35:48.670112] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:35.098 [2024-07-16 00:35:48.670121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.670125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.098 [2024-07-16 00:35:48.670128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d32ec0) 00:24:35.098 [2024-07-16 00:35:48.670135] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-07-16 00:35:48.670144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1db6440, cid 3, qid 0 00:24:35.099 [2024-07-16 00:35:48.674237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.099 [2024-07-16 00:35:48.674246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.099 [2024-07-16 00:35:48.674249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.099 [2024-07-16 00:35:48.674253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1db6440) on tqpair=0x1d32ec0 00:24:35.099 [2024-07-16 00:35:48.674261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:24:35.099 0% 00:24:35.099 Data Units Read: 0 00:24:35.099 Data Units Written: 0 00:24:35.099 Host Read Commands: 0 00:24:35.099 Host Write Commands: 0 00:24:35.099 Controller Busy Time: 0 minutes 00:24:35.099 Power Cycles: 0 00:24:35.099 Power On Hours: 0 hours 00:24:35.099 Unsafe Shutdowns: 0 00:24:35.099 Unrecoverable Media Errors: 0 00:24:35.099 Lifetime Error Log Entries: 0 00:24:35.099 Warning Temperature Time: 0 minutes 00:24:35.099 Critical Temperature Time: 0 minutes 00:24:35.099 00:24:35.099 Number of Queues 00:24:35.099 ================ 00:24:35.099 Number of I/O Submission Queues: 127 00:24:35.099 Number of I/O Completion Queues: 127 00:24:35.099 00:24:35.099 Active Namespaces 00:24:35.099 ================= 00:24:35.099 Namespace ID:1 00:24:35.099 Error Recovery Timeout: Unlimited 00:24:35.099 Command Set Identifier: NVM (00h) 00:24:35.099 Deallocate: Supported 00:24:35.099 Deallocated/Unwritten Error: Not Supported 00:24:35.099 Deallocated Read Value: Unknown 00:24:35.099 Deallocate in Write Zeroes: Not Supported 00:24:35.099 Deallocated Guard Field: 0xFFFF 00:24:35.099 Flush: Supported 00:24:35.099 Reservation: Supported 00:24:35.099 Namespace Sharing Capabilities: Multiple Controllers 00:24:35.099 Size (in LBAs): 131072 (0GiB) 00:24:35.099 Capacity (in LBAs): 131072 (0GiB) 00:24:35.099 Utilization (in LBAs): 131072 (0GiB) 00:24:35.099 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:35.099 EUI64: ABCDEF0123456789 00:24:35.099 UUID: cade7895-6787-4a8e-920c-ed2bfab0b69f 00:24:35.099 Thin Provisioning: Not Supported 00:24:35.099 Per-NS Atomic Units: Yes 00:24:35.099 Atomic Boundary Size (Normal): 0 00:24:35.099 Atomic Boundary Size (PFail): 0 00:24:35.099 Atomic Boundary Offset: 0 00:24:35.099 Maximum Single Source Range Length: 65535 00:24:35.099 Maximum Copy Length: 65535 00:24:35.099 Maximum Source Range Count: 1 00:24:35.099 NGUID/EUI64 Never Reused: No 00:24:35.099 Namespace Write Protected: No 00:24:35.099 Number of LBA Formats: 1 00:24:35.099 Current LBA Format: LBA Format #00 00:24:35.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:35.099 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - 
SIGINT SIGTERM EXIT 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.099 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.099 rmmod nvme_tcp 00:24:35.360 rmmod nvme_fabrics 00:24:35.360 rmmod nvme_keyring 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1192384 ']' 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1192384 ']' 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192384' 00:24:35.360 killing process with pid 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1192384 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.360 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.361 00:35:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.361 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.361 00:35:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.931 00:35:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.931 00:24:37.931 real 0m12.120s 00:24:37.931 user 0m8.166s 00:24:37.931 sys 0m6.439s 00:24:37.931 00:35:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:37.931 00:35:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.931 ************************************ 00:24:37.931 END TEST nvmf_identify 00:24:37.931 
************************************ 00:24:37.931 00:35:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:37.931 00:35:51 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:37.931 00:35:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:37.931 00:35:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.931 00:35:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.931 ************************************ 00:24:37.931 START TEST nvmf_perf 00:24:37.931 ************************************ 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:37.931 * Looking for test storage... 00:24:37.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.931 00:35:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.932 00:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.075 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:46.076 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:46.076 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:46.076 Found net devices under 0000:31:00.0: cvl_0_0 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.076 00:35:59 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:46.076 Found net devices under 0000:31:00.1: cvl_0_1 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:46.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:24:46.076 00:24:46.076 --- 10.0.0.2 ping statistics --- 00:24:46.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.076 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:46.076 00:24:46.076 --- 10.0.0.1 ping statistics --- 00:24:46.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.076 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1197408 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1197408 00:24:46.076 00:35:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1197408 ']' 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.077 00:35:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:46.077 [2024-07-16 00:35:59.499287] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:46.077 [2024-07-16 00:35:59.499338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.077 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.077 [2024-07-16 00:35:59.572685] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.077 [2024-07-16 00:35:59.638849] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.077 [2024-07-16 00:35:59.638886] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.077 [2024-07-16 00:35:59.638893] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.077 [2024-07-16 00:35:59.638900] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.077 [2024-07-16 00:35:59.638906] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.077 [2024-07-16 00:35:59.639049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.077 [2024-07-16 00:35:59.639191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.077 [2024-07-16 00:35:59.639354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.077 [2024-07-16 00:35:59.639354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.648 00:36:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.648 00:36:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:46.648 00:36:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.648 00:36:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.648 00:36:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:46.909 00:36:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.909 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:46.909 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:47.478 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:47.478 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:47.478 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:47.478 00:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:47.738 [2024-07-16 00:36:01.309979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:24:47.738 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.997 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:47.997 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.257 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:48.258 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:48.258 00:36:01 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.517 [2024-07-16 00:36:01.996547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.517 00:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:48.777 00:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:48.777 00:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:48.777 00:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:48.777 00:36:02 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:50.157 Initializing NVMe Controllers 00:24:50.157 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:50.158 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:50.158 Initialization complete. Launching workers. 00:24:50.158 ======================================================== 00:24:50.158 Latency(us) 00:24:50.158 Device Information : IOPS MiB/s Average min max 00:24:50.158 PCIE (0000:65:00.0) NSID 1 from core 0: 79940.57 312.27 399.55 13.25 5304.91 00:24:50.158 ======================================================== 00:24:50.158 Total : 79940.57 312.27 399.55 13.25 5304.91 00:24:50.158 00:24:50.158 00:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.158 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.538 Initializing NVMe Controllers 00:24:51.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:51.538 Initialization complete. Launching workers. 
00:24:51.538 ======================================================== 00:24:51.538 Latency(us) 00:24:51.538 Device Information : IOPS MiB/s Average min max 00:24:51.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.94 0.35 11232.48 378.20 45703.65 00:24:51.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.96 0.25 16008.00 4997.23 47905.99 00:24:51.538 ======================================================== 00:24:51.538 Total : 154.90 0.61 13235.12 378.20 47905.99 00:24:51.538 00:24:51.538 00:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:51.538 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.479 Initializing NVMe Controllers 00:24:52.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:52.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:52.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:52.479 Initialization complete. Launching workers. 00:24:52.479 ======================================================== 00:24:52.479 Latency(us) 00:24:52.479 Device Information : IOPS MiB/s Average min max 00:24:52.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10830.73 42.31 2954.99 430.83 6652.01 00:24:52.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.91 14.84 8475.03 6937.62 16120.06 00:24:52.479 ======================================================== 00:24:52.479 Total : 14630.63 57.15 4388.67 430.83 16120.06 00:24:52.479 00:24:52.479 00:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:52.479 00:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:52.479 00:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:52.479 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.051 Initializing NVMe Controllers 00:24:55.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.051 Controller IO queue size 128, less than required. 00:24:55.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.051 Controller IO queue size 128, less than required. 00:24:55.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:55.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:55.051 Initialization complete. Launching workers. 
00:24:55.051 ======================================================== 00:24:55.051 Latency(us) 00:24:55.051 Device Information : IOPS MiB/s Average min max 00:24:55.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 971.73 242.93 135721.17 71040.15 180972.55 00:24:55.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.03 147.26 222000.89 60425.76 337962.69 00:24:55.051 ======================================================== 00:24:55.051 Total : 1560.76 390.19 168283.21 60425.76 337962.69 00:24:55.051 00:24:55.051 00:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:55.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.051 No valid NVMe controllers or AIO or URING devices found 00:24:55.051 Initializing NVMe Controllers 00:24:55.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.051 Controller IO queue size 128, less than required. 00:24:55.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:55.051 Controller IO queue size 128, less than required. 00:24:55.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:55.051 WARNING: Some requested NVMe devices were skipped 00:24:55.312 00:36:08 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:55.312 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.878 Initializing NVMe Controllers 00:24:57.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:57.878 Controller IO queue size 128, less than required. 00:24:57.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:57.878 Controller IO queue size 128, less than required. 00:24:57.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:57.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:57.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:57.878 Initialization complete. Launching workers. 
00:24:57.878 00:24:57.878 ==================== 00:24:57.878 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:57.878 TCP transport: 00:24:57.878 polls: 32407 00:24:57.878 idle_polls: 13170 00:24:57.878 sock_completions: 19237 00:24:57.878 nvme_completions: 4543 00:24:57.878 submitted_requests: 6938 00:24:57.878 queued_requests: 1 00:24:57.878 00:24:57.878 ==================== 00:24:57.878 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:57.878 TCP transport: 00:24:57.878 polls: 30757 00:24:57.878 idle_polls: 11288 00:24:57.878 sock_completions: 19469 00:24:57.878 nvme_completions: 4513 00:24:57.878 submitted_requests: 6728 00:24:57.878 queued_requests: 1 00:24:57.878 ======================================================== 00:24:57.878 Latency(us) 00:24:57.878 Device Information : IOPS MiB/s Average min max 00:24:57.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1135.49 283.87 115238.33 53862.56 173316.46 00:24:57.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1127.99 282.00 115873.70 59746.02 157020.23 00:24:57.879 ======================================================== 00:24:57.879 Total : 2263.48 565.87 115554.96 53862.56 173316.46 00:24:57.879 00:24:57.879 00:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:57.879 00:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.192 rmmod nvme_tcp 00:24:58.192 rmmod nvme_fabrics 00:24:58.192 rmmod nvme_keyring 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1197408 ']' 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1197408 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1197408 ']' 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1197408 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197408 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.192 00:36:11 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197408' 00:24:58.192 killing process with pid 1197408 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1197408 00:24:58.192 00:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1197408 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.126 00:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.671 00:36:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.671 00:25:02.671 real 0m24.593s 00:25:02.671 user 0m57.989s 00:25:02.671 sys 0m8.411s 00:25:02.671 00:36:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.671 00:36:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:02.671 ************************************ 00:25:02.671 END TEST nvmf_perf 00:25:02.671 ************************************ 00:25:02.671 00:36:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:02.671 00:36:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:02.671 00:36:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:02.671 00:36:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.671 00:36:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.671 ************************************ 00:25:02.671 START TEST nvmf_fio_host 00:25:02.671 ************************************ 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:02.671 * Looking for test storage... 
00:25:02.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.671 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.672 00:36:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:10.827 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:10.827 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:10.827 Found net devices under 0000:31:00.0: cvl_0_0 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:10.827 Found net devices under 0000:31:00.1: cvl_0_1 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
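The entries above end with common.sh settling on is_hw=yes for the two e810-backed ports (cvl_0_0 / cvl_0_1); the entries that follow show nvmf_tcp_init moving one of those ports into a network namespace and assigning the 10.0.0.x test addresses before nvmf_tgt is started. Condensed into a standalone sketch for reference — the interface and namespace names are taken from this log, root privileges are assumed, and the ip/iptables invocations are simply the ones visible in the trace that follows:

  #!/usr/bin/env bash
  # Rebuild the NVMe/TCP test topology seen in this log:
  # the target side runs inside a network namespace, the initiator stays in the root namespace.
  set -e

  TARGET_IF=cvl_0_0        # moved into the namespace, will hold 10.0.0.2
  INITIATOR_IF=cvl_0_1     # stays in the root namespace, will hold 10.0.0.1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side port.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Same sanity pings the harness runs before launching nvmf_tgt in the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Keeping the target end in its own namespace appears to be what forces traffic between the two ports of the same NIC onto the wire instead of being short-circuited by the host routing table, which is also why the target application is later launched under "ip netns exec cvl_0_0_ns_spdk".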
00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.827 00:36:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:25:10.827 00:25:10.827 --- 10.0.0.2 ping statistics --- 00:25:10.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.827 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:25:10.827 00:25:10.827 --- 10.0.0.1 ping statistics --- 00:25:10.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.827 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1205375 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1205375 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1205375 ']' 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.827 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.827 [2024-07-16 00:36:24.126801] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:10.827 [2024-07-16 00:36:24.126891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.827 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.827 [2024-07-16 00:36:24.205512] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.828 [2024-07-16 00:36:24.274332] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:10.828 [2024-07-16 00:36:24.274369] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.828 [2024-07-16 00:36:24.274376] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.828 [2024-07-16 00:36:24.274382] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.828 [2024-07-16 00:36:24.274388] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.828 [2024-07-16 00:36:24.274530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.828 [2024-07-16 00:36:24.274645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.828 [2024-07-16 00:36:24.274837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.828 [2024-07-16 00:36:24.274838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.396 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.396 00:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:11.396 00:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:11.655 [2024-07-16 00:36:25.032133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.655 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:11.655 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.655 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.655 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:11.655 Malloc1 00:25:11.655 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.914 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:12.175 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.175 [2024-07-16 00:36:25.741649] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.175 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:12.436 00:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:12.696 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:12.696 fio-3.35 00:25:12.696 Starting 1 thread 00:25:12.957 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.505 00:25:15.505 test: (groupid=0, jobs=1): err= 0: pid=1205969: Tue Jul 16 00:36:28 2024 00:25:15.505 read: IOPS=9670, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec) 00:25:15.505 slat (usec): min=2, max=223, avg= 2.19, stdev= 2.22 00:25:15.505 clat (usec): min=3407, max=11800, avg=7290.22, stdev=526.03 00:25:15.505 lat (usec): min=3435, max=11802, avg=7292.41, stdev=525.84 00:25:15.505 clat percentiles (usec): 00:25:15.505 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:25:15.505 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:25:15.505 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:25:15.505 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[10945], 99.95th=[11600], 00:25:15.505 | 99.99th=[11731] 00:25:15.505 bw ( KiB/s): min=37405, 
max=39424, per=99.89%, avg=38641.25, stdev=914.48, samples=4 00:25:15.505 iops : min= 9351, max= 9856, avg=9660.25, stdev=228.73, samples=4 00:25:15.505 write: IOPS=9679, BW=37.8MiB/s (39.6MB/s)(75.9MiB/2006msec); 0 zone resets 00:25:15.505 slat (usec): min=2, max=209, avg= 2.28, stdev= 1.66 00:25:15.505 clat (usec): min=2316, max=11020, avg=5846.28, stdev=444.57 00:25:15.505 lat (usec): min=2331, max=11022, avg=5848.56, stdev=444.44 00:25:15.505 clat percentiles (usec): 00:25:15.505 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:25:15.505 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5932], 00:25:15.505 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6521], 00:25:15.505 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 9241], 99.95th=[10814], 00:25:15.505 | 99.99th=[10945] 00:25:15.505 bw ( KiB/s): min=38139, max=39168, per=99.95%, avg=38702.75, stdev=426.38, samples=4 00:25:15.505 iops : min= 9534, max= 9792, avg=9675.50, stdev=106.93, samples=4 00:25:15.505 lat (msec) : 4=0.12%, 10=99.76%, 20=0.12% 00:25:15.505 cpu : usr=70.42%, sys=26.58%, ctx=40, majf=0, minf=6 00:25:15.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:15.505 issued rwts: total=19400,19418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:15.505 00:25:15.505 Run status group 0 (all jobs): 00:25:15.505 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.5MB), run=2006-2006msec 00:25:15.505 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.9MiB (79.5MB), run=2006-2006msec 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.505 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:15.506 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:15.506 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:15.506 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:15.506 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:15.506 00:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:15.506 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:15.506 fio-3.35 00:25:15.506 Starting 1 thread 00:25:15.506 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.889 [2024-07-16 00:36:30.511443] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b240 is same with the state(5) to be set 00:25:16.889 [2024-07-16 00:36:30.511507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b240 is same with the state(5) to be set 00:25:17.832 00:25:17.832 test: (groupid=0, jobs=1): err= 0: pid=1206730: Tue Jul 16 00:36:31 2024 00:25:17.832 read: IOPS=9075, BW=142MiB/s (149MB/s)(285MiB/2007msec) 00:25:17.832 slat (usec): min=3, max=109, avg= 3.68, stdev= 1.66 00:25:17.832 clat (usec): min=2508, max=18760, avg=8683.38, stdev=2258.98 00:25:17.832 lat (usec): min=2512, max=18763, avg=8687.06, stdev=2259.13 00:25:17.832 clat percentiles (usec): 00:25:17.832 | 1.00th=[ 4424], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6652], 00:25:17.832 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:25:17.832 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[11731], 95.00th=[12387], 00:25:17.832 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15533], 99.95th=[16712], 00:25:17.832 | 99.99th=[17171] 00:25:17.832 bw ( KiB/s): min=68800, max=77632, per=49.52%, avg=71904.00, stdev=3913.17, samples=4 00:25:17.832 iops : min= 4300, max= 4852, avg=4494.00, stdev=244.57, samples=4 00:25:17.832 write: IOPS=5279, BW=82.5MiB/s (86.5MB/s)(147MiB/1779msec); 0 zone resets 00:25:17.832 slat (usec): min=40, max=555, avg=41.32, stdev=10.01 00:25:17.832 clat (usec): min=3922, max=19355, avg=9544.32, stdev=1736.61 00:25:17.832 lat (usec): min=3962, max=19395, avg=9585.65, stdev=1738.35 00:25:17.832 clat percentiles (usec): 00:25:17.832 | 1.00th=[ 6194], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 8160], 00:25:17.832 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:17.832 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11731], 95.00th=[12780], 00:25:17.832 | 99.00th=[14746], 99.50th=[15926], 99.90th=[17957], 99.95th=[18744], 00:25:17.832 | 99.99th=[19268] 00:25:17.832 bw ( 
KiB/s): min=70208, max=80896, per=88.50%, avg=74752.00, stdev=4528.42, samples=4 00:25:17.832 iops : min= 4388, max= 5056, avg=4672.00, stdev=283.03, samples=4 00:25:17.832 lat (msec) : 4=0.26%, 10=69.79%, 20=29.96% 00:25:17.832 cpu : usr=82.70%, sys=14.96%, ctx=15, majf=0, minf=13 00:25:17.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:17.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:17.832 issued rwts: total=18215,9392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:17.832 00:25:17.832 Run status group 0 (all jobs): 00:25:17.832 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (298MB), run=2007-2007msec 00:25:17.832 WRITE: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=147MiB (154MB), run=1779-1779msec 00:25:17.832 00:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.093 rmmod nvme_tcp 00:25:18.093 rmmod nvme_fabrics 00:25:18.093 rmmod nvme_keyring 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1205375 ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1205375 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1205375 ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1205375 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1205375 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1205375' 00:25:18.093 killing process with pid 1205375 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 
-- # kill 1205375 00:25:18.093 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1205375 00:25:18.354 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.354 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.354 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.355 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.355 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.355 00:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.355 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.355 00:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.268 00:36:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.268 00:25:20.268 real 0m18.023s 00:25:20.268 user 1m2.579s 00:25:20.268 sys 0m7.966s 00:25:20.268 00:36:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:20.268 00:36:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.268 ************************************ 00:25:20.268 END TEST nvmf_fio_host 00:25:20.269 ************************************ 00:25:20.269 00:36:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:20.269 00:36:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:20.269 00:36:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:20.269 00:36:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.269 00:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.530 ************************************ 00:25:20.530 START TEST nvmf_failover 00:25:20.530 ************************************ 00:25:20.530 00:36:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:20.530 * Looking for test storage... 
00:25:20.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.530 00:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:28.665 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:28.665 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:28.665 Found net devices under 0000:31:00.0: cvl_0_0 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:28.665 Found net devices under 0000:31:00.1: cvl_0_1 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.665 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:28.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:25:28.666 00:25:28.666 --- 10.0.0.2 ping statistics --- 00:25:28.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.666 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:25:28.666 00:25:28.666 --- 10.0.0.1 ping statistics --- 00:25:28.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.666 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.666 00:36:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1211755 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1211755 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1211755 ']' 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.666 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.666 [2024-07-16 00:36:42.066360] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:25:28.666 [2024-07-16 00:36:42.066408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.666 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.666 [2024-07-16 00:36:42.132406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:28.666 [2024-07-16 00:36:42.186403] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.666 [2024-07-16 00:36:42.186436] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.666 [2024-07-16 00:36:42.186442] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.666 [2024-07-16 00:36:42.186450] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.666 [2024-07-16 00:36:42.186454] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.666 [2024-07-16 00:36:42.186557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.666 [2024-07-16 00:36:42.186714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.666 [2024-07-16 00:36:42.186716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.238 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.238 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:29.238 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.238 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.238 00:36:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:29.499 00:36:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.499 00:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:29.499 [2024-07-16 00:36:43.018168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.499 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:29.760 Malloc0 00:25:29.760 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.020 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.020 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.280 [2024-07-16 00:36:43.701500] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.280 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:30.280 [2024-07-16 
00:36:43.869885] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.280 00:36:43 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:30.541 [2024-07-16 00:36:44.038408] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1212261 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1212261 /var/tmp/bdevperf.sock 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1212261 ']' 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.541 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.488 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.488 00:36:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:31.488 00:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:31.748 NVMe0n1 00:25:31.748 00:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.008 00:25:32.008 00:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1212461 00:25:32.008 00:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:32.008 00:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.952 00:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.213 [2024-07-16 00:36:46.654628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654792] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654814] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 [2024-07-16 00:36:46.654823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3ad0 is same with the state(5) to be set 00:25:33.213 00:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:36.515 00:36:49 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.515 00:25:36.515 00:36:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:36.515 [2024-07-16 00:36:50.137689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137771] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137781] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.515 [2024-07-16 00:36:50.137799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5200 is same with the state(5) to be set 00:25:36.776 00:36:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:40.075 00:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.075 [2024-07-16 00:36:53.316277] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.075 00:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:41.092 00:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:41.092 [2024-07-16 00:36:54.495611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.092 [2024-07-16 00:36:54.495670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495693] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 [2024-07-16 00:36:54.495722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5f70 is same with the state(5) to be set 00:25:41.093 00:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1212461 00:25:47.741 0 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1212261 ']' 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1212261' 00:25:47.741 killing process with pid 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1212261 00:25:47.741 00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:47.741 [2024-07-16 00:36:44.110566] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:47.741 [2024-07-16 00:36:44.110631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212261 ] 00:25:47.741 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.741 [2024-07-16 00:36:44.176092] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.741 [2024-07-16 00:36:44.240376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.741 Running I/O for 15 seconds... 
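Pieced together from the RPC calls traced above, the failover exercise is: give bdevperf one controller (NVMe0) with two TCP paths to cnode1, start a 15-second verify workload, and then remove and re-add listeners so the active path is forced to move between ports 4420, 4421 and 4422. A rough reconstruction of that driver sequence, with the long Jenkins workspace path shortened to $spdk (a placeholder introduced for this sketch); the flags mirror the traced commands:

  spdk=/path/to/spdk                      # placeholder for the workspace path used above
  rpc="$spdk/scripts/rpc.py"
  brpc="$rpc -s /var/tmp/bdevperf.sock"   # bdevperf was started with -z -r /var/tmp/bdevperf.sock

  # two paths to the same subsystem; multipath is handled inside bdev_nvme
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # kick off the 15 s verify job, then cycle listeners underneath it
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait $!                                 # wait for the bdevperf run to finish

The long runs of "recv state of tqpair ... is same with the state(5) to be set" messages above appear to come from the target's TCP qpair state machine as those listeners are torn down, and the ABORTED - SQ DELETION completions dumped from try.txt below are the host side of the same disconnect-and-failover cycle.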
00:25:47.741 [2024-07-16 00:36:46.656796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.741 [2024-07-16 00:36:46.656827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.741 [2024-07-16 00:36:46.656846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.741 [2024-07-16 00:36:46.656862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.741 [2024-07-16 00:36:46.656877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed1fa0 is same with the state(5) to be set 00:25:47.741 [2024-07-16 00:36:46.656943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.656953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.656978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.656988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.656997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.741 [2024-07-16 00:36:46.657386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.741 [2024-07-16 00:36:46.657394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:25:47.741 [2024-07-16 00:36:46.657403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.741 [2024-07-16 00:36:46.657410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat for the remaining queued commands on qid:1 (WRITE lba:94224-94848, READ lba:93832-93992), each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:47.744 [2024-07-16 00:36:46.659088] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:47.744 [2024-07-16 00:36:46.659095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:47.744 [2024-07-16 00:36:46.659102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94000 len:8 PRP1 0x0 PRP2 0x0
00:25:47.744 [2024-07-16 00:36:46.659109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.744 [2024-07-16 00:36:46.659144] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ecde20 was disconnected and freed. reset controller.
00:25:47.744 [2024-07-16 00:36:46.659154] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:47.744 [2024-07-16 00:36:46.659162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:47.744 [2024-07-16 00:36:46.662668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:47.744 [2024-07-16 00:36:46.662691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed1fa0 (9): Bad file descriptor
00:25:47.744 [2024-07-16 00:36:46.705515] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:47.744 [2024-07-16 00:36:50.140884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.744 [2024-07-16 00:36:50.140919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat for the remaining queued commands on qid:1 (READ lba:21360-21416, WRITE lba:21424-22200), each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:47.747 [2024-07-16 00:36:50.142733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22208 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 
00:36:50.142907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.747 [2024-07-16 00:36:50.142956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.142976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.142984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.142992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 00:36:50.143008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22328 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 00:36:50.143033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 00:36:50.143059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22344 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 
00:36:50.143092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 00:36:50.143119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.747 [2024-07-16 00:36:50.143146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.747 [2024-07-16 00:36:50.143152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:8 PRP1 0x0 PRP2 0x0 00:25:47.747 [2024-07-16 00:36:50.143159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143194] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1efeb40 was disconnected and freed. reset controller. 00:25:47.747 [2024-07-16 00:36:50.143205] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:47.747 [2024-07-16 00:36:50.143223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.747 [2024-07-16 00:36:50.143235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.747 [2024-07-16 00:36:50.143252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.747 [2024-07-16 00:36:50.143267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.747 [2024-07-16 00:36:50.143281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:50.143289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
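The wall of "ABORTED - SQ DELETION (00/08)" completions above is the I/O qpair being torn down: bdev_nvme has started a failover from 10.0.0.2:4421 to 10.0.0.2:4422, so the in-flight and queued WRITEs on sqid:1 are completed manually with that status, the qpair is freed, and nqn.2016-06.io.spdk:cnode1 is marked failed ahead of the controller reset that follows. For sifting through output like this offline, a minimal sketch of a log-tally helper follows (hypothetical, not part of the test suite; the regular expressions are written against the nvme_io_qpair_print_command / spdk_nvme_print_completion lines visible in this log and may need adjusting for other SPDK versions):

#!/usr/bin/env python3
# Hypothetical helper: tally the command/completion notices printed by the
# SPDK NVMe driver in console logs like the block above.
import re
import sys
from collections import Counter

# "... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21888 len:8 ..."
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) sqid:(\d+) cid:(\d+)")
# "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\w+)/(\w+)\)")

def summarize(stream):
    commands = Counter()     # opcode name -> number of printed I/O commands
    completions = Counter()  # (status text, "sct/sc") -> number of completions
    for line in stream:
        # A single console line may hold several entries, so scan the whole line.
        for m in CMD_RE.finditer(line):
            commands[m.group(1)] += 1
        for m in CPL_RE.finditer(line):
            completions[(m.group(1), m.group(2) + "/" + m.group(3))] += 1
    return commands, completions

if __name__ == "__main__":
    commands, completions = summarize(sys.stdin)
    for opcode, count in commands.most_common():
        print(f"{opcode:8s} commands printed: {count}")
    for (text, code), count in completions.most_common():
        print(f"completions with '{text}' ({code}): {count}")

Fed the raw console log on stdin, this prints one line per I/O opcode and one per completion status, which makes it easy to check that the only status seen during the failover window is the expected SQ-deletion abort (00/08).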
00:25:47.747 [2024-07-16 00:36:50.143313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed1fa0 (9): Bad file descriptor 00:25:47.747 [2024-07-16 00:36:50.146803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.747 [2024-07-16 00:36:50.227523] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:47.747 [2024-07-16 00:36:54.497954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.747 [2024-07-16 00:36:54.497992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:54.498010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.747 [2024-07-16 00:36:54.498019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.747 [2024-07-16 00:36:54.498034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.747 [2024-07-16 00:36:54.498043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:47.748 [2024-07-16 00:36:54.498336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.748 [2024-07-16 00:36:54.498344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498503] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.748 [2024-07-16 00:36:54.498712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.748 [2024-07-16 00:36:54.498721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.498826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.498985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.498992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.749 [2024-07-16 00:36:54.499008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.499025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.499041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.499057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.749 [2024-07-16 00:36:54.499074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.749 [2024-07-16 00:36:54.499090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.749 [2024-07-16 00:36:54.499100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499343] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.750 [2024-07-16 00:36:54.499433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.750 [2024-07-16 00:36:54.499442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.751 [2024-07-16 00:36:54.499604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.751 [2024-07-16 00:36:54.499635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38368 len:8 PRP1 0x0 PRP2 0x0 00:25:47.751 [2024-07-16 00:36:54.499642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.751 [2024-07-16 00:36:54.499658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.751 [2024-07-16 00:36:54.499664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38376 len:8 PRP1 0x0 PRP2 0x0 00:25:47.751 [2024-07-16 00:36:54.499671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.751 [2024-07-16 00:36:54.499679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.751 [2024-07-16 00:36:54.499684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.751 [2024-07-16 00:36:54.499690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:38384 len:8 PRP1 0x0 PRP2 0x0
00:25:47.751 [2024-07-16 00:36:54.499697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the same record sequence (nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o -> nvme_qpair_manual_complete_request *NOTICE*: Command completed manually -> nvme_io_qpair_print_command for the queued command -> spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08)) repeats between 00:36:54.499704 and 00:36:54.511974 for every queued WRITE from lba:38392 through lba:38608 in steps of 8, and once for a queued READ at lba:37872, all on qid:1 cid:0]
00:25:47.752 [2024-07-16 00:36:54.512015] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f01b50 was disconnected and freed. reset controller.
00:25:47.752 [2024-07-16 00:36:54.512024] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:47.752 [2024-07-16 00:36:54.512051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:47.752 [2024-07-16 00:36:54.512061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.752 [2024-07-16 00:36:54.512071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:47.752 [2024-07-16 00:36:54.512078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.752 [2024-07-16 00:36:54.512086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:47.752 [2024-07-16 00:36:54.512094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.752 [2024-07-16 00:36:54.512102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:47.752 [2024-07-16 00:36:54.512110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.752 [2024-07-16 00:36:54.512117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:47.752 [2024-07-16 00:36:54.512161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed1fa0 (9): Bad file descriptor
00:25:47.752 [2024-07-16 00:36:54.515685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:47.752 [2024-07-16 00:36:54.556159] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
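The "Resetting controller successful" notices in this captured bdevperf log are what failover.sh keys on: at line 65 it counts that string in the saved output and fails the run unless exactly three resets (one per forced path failure) were seen, which is the grep -c / count=3 check traced just below. A minimal sketch of that assertion, assuming the bdevperf output was saved to try.txt as in this trace ($testdir is hypothetical shorthand for the host test directory):

  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi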
00:25:47.752
00:25:47.752 Latency(us)
00:25:47.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:47.752 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:47.752 Verification LBA range: start 0x0 length 0x4000
00:25:47.752 NVMe0n1 : 15.01 11165.70 43.62 376.16 0.00 11061.29 781.65 21517.65
00:25:47.752 ===================================================================================================================
00:25:47.752 Total : 11165.70 43.62 376.16 0.00 11061.29 781.65 21517.65
00:25:47.752 Received shutdown signal, test time was about 15.000000 seconds
00:25:47.752
00:25:47.752 Latency(us)
00:25:47.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:47.752 ===================================================================================================================
00:25:47.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:47.752
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1215464
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1215464 /var/tmp/bdevperf.sock
00:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1215464 ']'
00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
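The bdevperf instance launched here runs with -z, so it comes up idle on its own RPC socket and only starts I/O when told to; the path shaping below happens over that socket before perform_tests fires the workload. A rough outline of that pattern, using the socket and relative paths from this workspace (the socket-polling loop is only a stand-in for the harness's waitforlisten helper):

  # Start bdevperf in wait-for-RPC mode on a private socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Wait until the RPC socket exists before configuring it.
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

  # ... bdev_nvme_attach_controller / bdev_nvme_detach_controller calls go here ...

  # Kick off the actual I/O pass.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests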
00:25:47.752 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.752 00:37:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:48.324 00:37:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.324 00:37:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:48.324 00:37:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:48.324 [2024-07-16 00:37:01.827264] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:48.324 00:37:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:48.584 [2024-07-16 00:37:01.999628] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:48.585 00:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:48.845 NVMe0n1 00:25:48.845 00:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.105 00:25:49.105 00:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.365 00:25:49.365 00:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:49.365 00:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:49.626 00:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.885 00:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:53.181 00:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:53.181 00:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:53.181 00:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1216486 00:25:53.181 00:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:53.181 00:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1216486 00:25:54.122 0 00:25:54.122 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.122 [2024-07-16 00:37:00.899217] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
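Before this second run's log (replayed above and below via the cat of try.txt), the script reshaped the paths over RPC: two extra listeners are added on the target (ports 4421 and 4422), the same subsystem is attached three times to the one NVMe0 controller so NVMe0n1 has three paths, and the currently active 4420 path is then detached to force a failover while I/O runs. In outline, with the same addresses and NQN as this trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path from this workspace
  nqn=nqn.2016-06.io.spdk:cnode1

  # Extra listeners on the target side.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # Three paths to the same controller on the bdevperf side.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Drop the active path; bdev_nvme fails over to the next registered one.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0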
00:25:54.122 [2024-07-16 00:37:00.899285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215464 ] 00:25:54.122 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.122 [2024-07-16 00:37:00.965270] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.122 [2024-07-16 00:37:01.028323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.122 [2024-07-16 00:37:03.230950] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:54.122 [2024-07-16 00:37:03.230994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.122 [2024-07-16 00:37:03.231005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.122 [2024-07-16 00:37:03.231015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.122 [2024-07-16 00:37:03.231022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.122 [2024-07-16 00:37:03.231030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.122 [2024-07-16 00:37:03.231038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.122 [2024-07-16 00:37:03.231046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.122 [2024-07-16 00:37:03.231053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.122 [2024-07-16 00:37:03.231060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.122 [2024-07-16 00:37:03.231088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.122 [2024-07-16 00:37:03.231103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6cfa0 (9): Bad file descriptor 00:25:54.122 [2024-07-16 00:37:03.242371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:54.122 Running I/O for 1 seconds... 
00:25:54.122 00:25:54.122 Latency(us) 00:25:54.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.122 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:54.122 Verification LBA range: start 0x0 length 0x4000 00:25:54.123 NVMe0n1 : 1.01 11140.82 43.52 0.00 0.00 11434.58 2580.48 14417.92 00:25:54.123 =================================================================================================================== 00:25:54.123 Total : 11140.82 43.52 0.00 0.00 11434.58 2580.48 14417.92 00:25:54.123 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:54.123 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:54.123 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:54.382 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:54.382 00:37:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:54.643 00:37:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:54.643 00:37:08 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1215464 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1215464 ']' 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1215464 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1215464 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1215464' 00:25:57.944 killing process with pid 1215464 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1215464 00:25:57.944 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1215464 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:58.205 
00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.205 rmmod nvme_tcp 00:25:58.205 rmmod nvme_fabrics 00:25:58.205 rmmod nvme_keyring 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1211755 ']' 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1211755 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1211755 ']' 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1211755 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.205 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1211755 00:25:58.466 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:58.466 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:58.466 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1211755' 00:25:58.466 killing process with pid 1211755 00:25:58.466 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1211755 00:25:58.466 00:37:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1211755 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.466 00:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.010 00:37:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.010 00:26:01.010 real 0m40.157s 00:26:01.010 user 2m2.093s 00:26:01.010 sys 0m8.555s 00:26:01.010 00:37:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:01.010 00:37:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
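The teardown traced here follows the usual pattern: stop the bdevperf client, delete the subsystem on the target, remove the temporary output file, then let nvmftestfini kill nvmf_tgt and unload the kernel NVMe-oF modules (the rmmod lines above come from modprobe -v -r). A condensed sketch of the same steps; the pid and path variables are hypothetical stand-ins for the values seen in this run (bdevperf pid 1215464, nvmf_tgt pid 1211755):

  kill "$bdevperf_pid"; wait "$bdevperf_pid"
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f "$testdir/try.txt"

  kill "$nvmfpid"; wait "$nvmfpid"
  modprobe -v -r nvme-tcp        # also drags out nvme_fabrics and nvme_keyring, as logged above
  ip -4 addr flush cvl_0_1       # interface name from this host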
00:26:01.010 ************************************ 00:26:01.010 END TEST nvmf_failover 00:26:01.010 ************************************ 00:26:01.010 00:37:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:01.010 00:37:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:01.010 00:37:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:01.010 00:37:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:01.010 00:37:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:01.010 ************************************ 00:26:01.010 START TEST nvmf_host_discovery 00:26:01.010 ************************************ 00:26:01.010 00:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:01.010 * Looking for test storage... 00:26:01.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.010 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.010 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:01.010 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:01.011 00:37:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.011 00:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.148 00:37:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:09.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:09.148 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:09.148 00:37:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:09.148 Found net devices under 0000:31:00.0: cvl_0_0 00:26:09.148 00:37:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.148 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:09.149 Found net devices under 0000:31:00.1: cvl_0_1 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.149 00:37:22 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:09.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:26:09.149 00:26:09.149 --- 10.0.0.2 ping statistics --- 00:26:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.149 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:09.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:26:09.149 00:26:09.149 --- 10.0.0.1 ping statistics --- 00:26:09.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.149 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1222185 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1222185 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1222185 ']' 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.149 00:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.149 [2024-07-16 00:37:22.435303] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:09.149 [2024-07-16 00:37:22.435372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.149 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.149 [2024-07-16 00:37:22.530764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.149 [2024-07-16 00:37:22.624977] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.149 [2024-07-16 00:37:22.625046] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.149 [2024-07-16 00:37:22.625055] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.149 [2024-07-16 00:37:22.625061] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.149 [2024-07-16 00:37:22.625067] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:09.149 [2024-07-16 00:37:22.625093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.721 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.721 [2024-07-16 00:37:23.267858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.722 [2024-07-16 00:37:23.276055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.722 null0 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.722 null1 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1222514 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1222514 /tmp/host.sock 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1222514 ']' 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:09.722 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.722 00:37:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.982 [2024-07-16 00:37:23.357481] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:09.982 [2024-07-16 00:37:23.357546] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222514 ] 00:26:09.982 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.982 [2024-07-16 00:37:23.428632] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.982 [2024-07-16 00:37:23.502843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.553 00:37:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:10.553 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:10.814 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 [2024-07-16 00:37:24.511128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 
00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:11.076 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:11.077 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.338 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:11.338 00:37:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:11.599 [2024-07-16 00:37:25.205674] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:11.599 [2024-07-16 00:37:25.205694] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:11.599 [2024-07-16 00:37:25.205708] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:11.861 [2024-07-16 00:37:25.334121] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:11.861 [2024-07-16 00:37:25.438726] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:11.861 [2024-07-16 00:37:25.438749] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:12.122 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.384 00:37:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.384 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.385 00:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.646 [2024-07-16 00:37:26.239586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:12.646 [2024-07-16 00:37:26.240785] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:12.646 [2024-07-16 00:37:26.240816] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:12.646 00:37:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.907 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:12.908 [2024-07-16 00:37:26.328500] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:12.908 00:37:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:13.169 [2024-07-16 00:37:26.639187] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:13.169 [2024-07-16 00:37:26.639206] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:13.169 [2024-07-16 00:37:26.639211] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.113 [2024-07-16 00:37:27.527636] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:14.113 [2024-07-16 00:37:27.527660] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:14.113 [2024-07-16 00:37:27.530802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.113 [2024-07-16 00:37:27.530820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.113 [2024-07-16 00:37:27.530829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.113 [2024-07-16 00:37:27.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.113 [2024-07-16 00:37:27.530844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.113 [2024-07-16 00:37:27.530851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.113 [2024-07-16 00:37:27.530859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.113 [2024-07-16 00:37:27.530866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.113 [2024-07-16 00:37:27.530873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.113 00:37:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.113 [2024-07-16 00:37:27.540817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.113 [2024-07-16 00:37:27.550855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.113 [2024-07-16 00:37:27.551220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.113 [2024-07-16 00:37:27.551241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.113 [2024-07-16 00:37:27.551255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.113 [2024-07-16 00:37:27.551267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.113 [2024-07-16 00:37:27.551278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.113 [2024-07-16 00:37:27.551285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.113 [2024-07-16 00:37:27.551293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.113 [2024-07-16 00:37:27.551304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
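
Note: the waitforcondition / eval / sleep pattern expanded throughout this trace reduces to a small polling helper. A minimal sketch reconstructed from the xtrace lines (the real helper lives in the suite's autotest_common.sh; anything the trace does not show, such as the return value when retries run out, is an assumption):

# Sketch of the polling helper whose expansion appears in this trace; the failure
# return code is assumed, everything else mirrors the traced lines (local max=10,
# (( max-- )), eval of the condition string, sleep 1, return 0 on success).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
# Used in this test, for example, as:
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
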
00:26:14.113 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.113 [2024-07-16 00:37:27.560912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.113 [2024-07-16 00:37:27.561493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.113 [2024-07-16 00:37:27.561531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.113 [2024-07-16 00:37:27.561542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.113 [2024-07-16 00:37:27.561561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.113 [2024-07-16 00:37:27.561587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.113 [2024-07-16 00:37:27.561595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.113 [2024-07-16 00:37:27.561603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.561618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.114 [2024-07-16 00:37:27.570964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.114 [2024-07-16 00:37:27.571205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.114 [2024-07-16 00:37:27.571220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.114 [2024-07-16 00:37:27.571228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.114 [2024-07-16 00:37:27.571246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.114 [2024-07-16 00:37:27.571257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.114 [2024-07-16 00:37:27.571264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.114 [2024-07-16 00:37:27.571271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.571281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:14.114 [2024-07-16 00:37:27.581022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.114 [2024-07-16 00:37:27.581446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.114 [2024-07-16 00:37:27.581484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.114 [2024-07-16 00:37:27.581495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.114 [2024-07-16 00:37:27.581514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.114 [2024-07-16 00:37:27.581531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.114 [2024-07-16 00:37:27.581538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.114 [2024-07-16 00:37:27.581545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.581560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:14.114 [2024-07-16 00:37:27.591078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.114 [2024-07-16 00:37:27.591593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.114 [2024-07-16 00:37:27.591632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.114 [2024-07-16 00:37:27.591642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.114 [2024-07-16 00:37:27.591661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.114 [2024-07-16 00:37:27.591697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.114 [2024-07-16 00:37:27.591706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.114 [2024-07-16 00:37:27.591715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.591731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
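
Note: the notification checks interleaved with these reconnect attempts (get_notification_count / is_notification_count_eq) come down to a single RPC plus a jq length filter. A hedged standalone equivalent, assuming scripts/rpc.py from an SPDK checkout (the harness issues the same call through its rpc_cmd wrapper):

# Fetch notifications starting from the last accounted-for id and count them,
# as the trace does with "notify_get_notifications -i <notify_id> | jq '. | length'".
# notify_id=2 matches the value carried by the test at this point; adjust as needed.
notify_id=2
notification_count=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
echo "$notification_count"
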
00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.114 [2024-07-16 00:37:27.601134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.114 [2024-07-16 00:37:27.601690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.114 [2024-07-16 00:37:27.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.114 [2024-07-16 00:37:27.601714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.114 [2024-07-16 00:37:27.601726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.114 [2024-07-16 00:37:27.601749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.114 [2024-07-16 00:37:27.601757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.114 [2024-07-16 00:37:27.601769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.601779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.114 [2024-07-16 00:37:27.611191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.114 [2024-07-16 00:37:27.611485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.114 [2024-07-16 00:37:27.611498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfaaa0 with addr=10.0.0.2, port=4420 00:26:14.114 [2024-07-16 00:37:27.611505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfaaa0 is same with the state(5) to be set 00:26:14.114 [2024-07-16 00:37:27.611516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfaaa0 (9): Bad file descriptor 00:26:14.114 [2024-07-16 00:37:27.611526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.114 [2024-07-16 00:37:27.611533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.114 [2024-07-16 00:37:27.611540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.114 [2024-07-16 00:37:27.611550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
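
Note: the burst of connect() failures above (errno 111) follows the earlier nvmf_subsystem_remove_listener call for port 4420; the next entries show the discovery poller dropping the 4420 path and keeping 4421. A minimal standalone version of that step, assuming scripts/rpc.py from an SPDK checkout (the harness drives the same RPCs through rpc_cmd):

# Remove the first data listener on the target side, then ask the host-side app
# (the nvmf_tgt started with -r /tmp/host.sock) which paths remain on controller nvme0.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
# Once discovery re-polls, only 4421 should be listed, matching the [[ 4421 == 4421 ]] check below.
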
00:26:14.114 [2024-07-16 00:37:27.614791] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:14.114 [2024-07-16 00:37:27.614808] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.114 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:14.377 
00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.377 00:37:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.320 [2024-07-16 00:37:28.936371] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.320 [2024-07-16 00:37:28.936388] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.320 [2024-07-16 00:37:28.936400] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.581 [2024-07-16 00:37:29.063808] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:15.842 [2024-07-16 00:37:29.334391] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:15.842 [2024-07-16 00:37:29.334421] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.842 request: 00:26:15.842 { 00:26:15.842 "name": "nvme", 00:26:15.842 "trtype": "tcp", 00:26:15.842 "traddr": "10.0.0.2", 00:26:15.842 "adrfam": "ipv4", 00:26:15.842 "trsvcid": "8009", 00:26:15.842 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:15.842 "wait_for_attach": true, 00:26:15.842 "method": "bdev_nvme_start_discovery", 00:26:15.842 "req_id": 1 00:26:15.842 } 00:26:15.842 Got JSON-RPC error response 00:26:15.842 response: 00:26:15.842 { 00:26:15.842 "code": -17, 00:26:15.842 "message": "File exists" 00:26:15.842 } 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:15.842 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.843 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 request: 00:26:16.104 { 00:26:16.104 "name": "nvme_second", 00:26:16.104 "trtype": "tcp", 00:26:16.104 "traddr": "10.0.0.2", 00:26:16.104 "adrfam": "ipv4", 00:26:16.104 "trsvcid": "8009", 00:26:16.104 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:16.104 "wait_for_attach": true, 00:26:16.104 "method": "bdev_nvme_start_discovery", 00:26:16.104 "req_id": 1 00:26:16.104 } 00:26:16.104 Got JSON-RPC error response 00:26:16.104 response: 00:26:16.104 { 00:26:16.104 "code": -17, 00:26:16.104 "message": "File exists" 00:26:16.104 } 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.104 00:37:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.104 00:37:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.045 [2024-07-16 00:37:30.593941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.045 [2024-07-16 00:37:30.593975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd84c0 with addr=10.0.0.2, port=8010 00:26:17.045 [2024-07-16 00:37:30.593989] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:17.045 [2024-07-16 00:37:30.593997] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:17.045 [2024-07-16 00:37:30.594004] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:17.987 [2024-07-16 00:37:31.596254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.987 [2024-07-16 00:37:31.596278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd84c0 with addr=10.0.0.2, port=8010 00:26:17.987 [2024-07-16 00:37:31.596289] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:17.987 [2024-07-16 00:37:31.596295] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:17.987 [2024-07-16 00:37:31.596302] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:19.372 [2024-07-16 00:37:32.598227] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:19.372 request: 00:26:19.372 { 00:26:19.372 "name": "nvme_second", 00:26:19.372 "trtype": "tcp", 00:26:19.372 "traddr": "10.0.0.2", 00:26:19.372 "adrfam": "ipv4", 00:26:19.372 "trsvcid": "8010", 00:26:19.372 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.372 "wait_for_attach": false, 00:26:19.372 "attach_timeout_ms": 3000, 00:26:19.372 "method": "bdev_nvme_start_discovery", 00:26:19.372 "req_id": 1 00:26:19.372 } 00:26:19.372 Got JSON-RPC error response 00:26:19.372 response: 00:26:19.372 { 00:26:19.372 "code": -110, 
00:26:19.372 "message": "Connection timed out" 00:26:19.372 } 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1222514 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.372 rmmod nvme_tcp 00:26:19.372 rmmod nvme_fabrics 00:26:19.372 rmmod nvme_keyring 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1222185 ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1222185 ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222185' 00:26:19.372 killing process with pid 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1222185 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.372 00:37:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.915 00:37:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.915 00:26:21.915 real 0m20.808s 00:26:21.915 user 0m23.756s 00:26:21.915 sys 0m7.456s 00:26:21.915 00:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.915 00:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.915 ************************************ 00:26:21.915 END TEST nvmf_host_discovery 00:26:21.915 ************************************ 00:26:21.915 00:37:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:21.915 00:37:35 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:21.915 00:37:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:21.915 00:37:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.915 00:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.915 ************************************ 00:26:21.915 START TEST nvmf_host_multipath_status 00:26:21.915 ************************************ 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:21.915 * Looking for test storage... 
00:26:21.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:21.915 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:21.916 00:37:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.916 00:37:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.119 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:30.120 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:30.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:30.120 Found net devices under 0000:31:00.0: cvl_0_0 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:30.120 Found net devices under 0000:31:00.1: cvl_0_1 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.120 00:37:43 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:26:30.120 00:26:30.120 --- 10.0.0.2 ping statistics --- 00:26:30.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.120 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:26:30.120 00:26:30.120 --- 10.0.0.1 ping statistics --- 00:26:30.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.120 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1229045 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1229045 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229045 ']' 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.120 00:37:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:30.120 [2024-07-16 00:37:43.452189] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:26:30.120 [2024-07-16 00:37:43.452260] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.120 [2024-07-16 00:37:43.530174] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:30.120 [2024-07-16 00:37:43.603859] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.120 [2024-07-16 00:37:43.603895] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.120 [2024-07-16 00:37:43.603902] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.120 [2024-07-16 00:37:43.603909] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.120 [2024-07-16 00:37:43.603915] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.120 [2024-07-16 00:37:43.604051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.120 [2024-07-16 00:37:43.604053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1229045 00:26:30.692 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:30.953 [2024-07-16 00:37:44.396254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.953 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:30.953 Malloc0 00:26:31.214 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:31.214 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:31.475 00:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.475 [2024-07-16 00:37:45.032082] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.475 00:37:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:31.736 [2024-07-16 00:37:45.172416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1229408 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1229408 /var/tmp/bdevperf.sock 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229408 ']' 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.736 00:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:32.679 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.679 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:32.679 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:32.679 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:33.250 Nvme0n1 00:26:33.250 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:33.511 Nvme0n1 00:26:33.511 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:33.511 00:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:35.424 00:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:35.424 00:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:35.685 00:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:35.685 00:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.068 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.328 00:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.587 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.587 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.587 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.587 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.849 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.849 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:37.849 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:37.849 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.109 00:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:39.050 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:39.050 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.050 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.050 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.310 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.310 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.310 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.310 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.569 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.569 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.569 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.569 00:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.569 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.569 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.569 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.569 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.829 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.089 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.089 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.089 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.089 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:40.089 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.351 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:40.351 00:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:41.412 00:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:41.412 00:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:41.412 00:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.412 00:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:41.672 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.672 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:41.672 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.672 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.932 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.192 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.192 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.192 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.192 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.452 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.452 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.452 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.452 00:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.452 00:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.452 00:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:42.452 00:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:42.711 00:37:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:42.711 00:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.091 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:44.351 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.351 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:44.351 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.351 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:44.611 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.611 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:44.612 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.612 00:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:44.612 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:44.612 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:44.612 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.612 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:44.872 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.872 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:44.872 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:44.872 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:45.133 00:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:46.093 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:46.093 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:46.093 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.093 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:46.353 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.353 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:46.353 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.353 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:46.614 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.614 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:46.614 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.614 00:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:46.614 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.614 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:46.614 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.614 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.874 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:47.137 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.137 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:47.137 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:47.396 00:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:47.397 00:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.779 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:49.040 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.040 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:49.040 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.040 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.300 00:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.560 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.560 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:49.822 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:49.822 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:49.822 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:50.083 00:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:51.024 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:51.024 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:51.024 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.024 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.284 00:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.545 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.545 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.545 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.545 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.806 00:38:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.806 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.067 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.067 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:52.067 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.329 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:52.329 00:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:53.714 00:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:53.714 00:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:53.714 00:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.714 00:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.714 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.974 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.974 00:38:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.974 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.974 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.235 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:54.496 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.496 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:54.496 00:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:54.496 00:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:54.757 00:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:55.701 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:55.701 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:55.701 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.701 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.961 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.961 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:55.962 00:38:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.962 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.223 00:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.484 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.484 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.484 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.484 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:56.746 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:57.007 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:57.268 00:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:58.212 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:58.212 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:58.212 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.212 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.473 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.473 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:58.473 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.473 00:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.473 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.473 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.473 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.473 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.734 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.734 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.734 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.734 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.995 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1229408 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229408 ']' 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229408 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229408 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229408' 00:26:59.256 killing process with pid 1229408 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229408 00:26:59.256 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229408 00:26:59.256 Connection closed with partial response: 00:26:59.256 00:26:59.256 00:26:59.521 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1229408 00:26:59.521 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:59.521 [2024-07-16 00:37:45.244569] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:59.521 [2024-07-16 00:37:45.244644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229408 ] 00:26:59.521 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.521 [2024-07-16 00:37:45.300862] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.521 [2024-07-16 00:37:45.352628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.521 Running I/O for 90 seconds... 
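The multipath_status trace above repeatedly drives the target's ANA state over rpc.py and then polls bdevperf's view of each path with bdev_nvme_get_io_paths piped through jq. The following is a minimal bash sketch of those helpers, reconstructed only from the commands visible in the trace; the authoritative definitions live in spdk/test/nvmf/host/multipath_status.sh, and names such as rpc_py, bdevperf_rpc_sock and NVMF_FIRST_TARGET_IP are assumptions made for the sketch:

#!/usr/bin/env bash
# Reconstructed sketch of the helpers traced above -- not the authoritative
# script; see spdk/test/nvmf/host/multipath_status.sh for the real versions.
# rpc_py, bdevperf_rpc_sock and NVMF_FIRST_TARGET_IP are assumed names.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
NVMF_FIRST_TARGET_IP=10.0.0.2

# Set the ANA state of the two listeners (ports 4420 and 4421) on the target.
set_ANA_state() {
	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp \
		-a $NVMF_FIRST_TARGET_IP -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp \
		-a $NVMF_FIRST_TARGET_IP -s 4421 -n "$2"
}

# Ask bdevperf for its I/O paths and compare one field (current/connected/
# accessible) of the path matching the given trsvcid against the expected value.
port_status() {
	local port=$1 field=$2 expected=$3 actual
	actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
		jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="'"$port"'").'"$field")
	[[ $actual == "$expected" ]]
}

# Expected current/connected/accessible flags for 4420 and 4421, in that order.
check_status() {
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}

# Example matching the trace: mark 4420 non_optimized and 4421 optimized, then
# expect the active I/O path to move to 4421 once the host refreshes ANA state.
set_ANA_state non_optimized optimized
sleep 1
check_status false true true true true true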
00:26:59.521 [2024-07-16 00:37:58.440197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.521 [2024-07-16 00:37:58.440759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.440990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.440995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:59.521 [2024-07-16 00:37:58.441007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.521 [2024-07-16 00:37:58.441012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:59.522 [2024-07-16 00:37:58.441023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:59.522 [... repeated nvme_qpair.c NOTICE pairs (nvme_io_qpair_print_command / spdk_nvme_print_completion) for qid:1, condensed: WRITE lba 45792-46192 and READ lba 45232-45600 at 00:37:58, then WRITE lba 109688-109856 and READ lba 109032-109656 at 00:38:10, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 with sqhd incrementing, while the in-use path was inaccessible ...]
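A note on the status seen in the condensed notices above: ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the path-related ANA (Asymmetric Namespace Access) completion status, status code type 03h with status code 02h, that a host sees while the target reports the path's ANA group as inaccessible, which is exactly the condition the multipath_status test provokes before I/O fails over to the other path. As a rough cross-check only (not part of the test suite; nvmf-tcp-phy-autotest.log is an assumed name for a saved copy of this console output), the per-status completion counts could be tallied with standard tools:

    # tally completion statuses in a saved copy of this console log (illustrative; assumed file name)
    grep -o 'ASYMMETRIC ACCESS [A-Z]*' nvmf-tcp-phy-autotest.log | sort | uniq -c

If every path-down completion lands under INACCESSIBLE rather than a generic transport error, the target reported its ANA state change the way the test expects.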
00:26:59.524 Received shutdown signal, test time was about 25.722680 seconds
00:26:59.524
00:26:59.524                                                                  Latency(us)
00:26:59.524 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:59.524 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:59.524      Verification LBA range: start 0x0 length 0x4000
00:26:59.524      Nvme0n1                 :      25.72   10881.79      42.51       0.00       0.00   11743.65     314.03 3019898.88
00:26:59.524 ===================================================================================================================
00:26:59.524 Total                        :              10881.79      42.51       0.00       0.00   11743.65     314.03 3019898.88
00:26:59.524 00:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status
-- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.524 rmmod nvme_tcp 00:26:59.524 rmmod nvme_fabrics 00:26:59.524 rmmod nvme_keyring 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:59.524 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1229045 ']' 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229045 ']' 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229045' 00:26:59.787 killing process with pid 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229045 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
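The run summary a few lines up is internally consistent: 10881.79 IOPS at a 4096-byte I/O size works out to about 42.5 MiB/s, matching the MiB/s column. For readers who want to redo by hand the teardown that nvmftestfini traces above, a condensed, hand-written approximation is sketched below; the NQN, pid and namespace name are the ones from this run, and the real logic (including the rmmod fallbacks and retry handling) lives in test/nvmf/common.sh and test/common/autotest_common.sh, not in these few lines.

    # hand-rolled approximation of the traced cleanup (sketch only; values taken from this run)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # in this log the removal also unloads nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1229045 && wait 1229045   # wait only applies if nvmf_tgt is a child of this shell, as it is here
    ip netns del cvl_0_0_ns_spdk 2>/dev/null   # roughly what _remove_spdk_ns amounts to for this run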
00:26:59.787 00:38:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.800 00:38:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.800 00:27:01.800 real 0m40.366s 00:27:01.800 user 1m42.006s 00:27:01.800 sys 0m11.363s 00:27:01.800 00:38:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.800 00:38:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.800 ************************************ 00:27:01.800 END TEST nvmf_host_multipath_status 00:27:01.800 ************************************ 00:27:02.061 00:38:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:02.061 00:38:15 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:02.061 00:38:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:02.061 00:38:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.061 00:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.061 ************************************ 00:27:02.061 START TEST nvmf_discovery_remove_ifc 00:27:02.061 ************************************ 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:02.061 * Looking for test storage... 00:27:02.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.061 00:38:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.061 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.062 00:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.206 00:38:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:10.206 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:10.206 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:10.206 Found net devices under 0000:31:00.0: cvl_0_0 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:10.206 Found net devices under 0000:31:00.1: cvl_0_1 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.206 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.467 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:27:10.468 00:27:10.468 --- 10.0.0.2 ping statistics --- 00:27:10.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.468 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:27:10.468 00:27:10.468 --- 10.0.0.1 ping statistics --- 00:27:10.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.468 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1239716 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1239716 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1239716 ']' 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.468 00:38:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.468 [2024-07-16 00:38:23.975755] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
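From this point the test brings up two SPDK app instances: the NVMe-oF target runs inside the cvl_0_0_ns_spdk namespace (nvmfappstart, pid 1239716 in this run) and listens on 10.0.0.2 ports 8009 and 4420, while a second nvmf_tgt bound to /tmp/host.sock acts as the discovery-side host with bdev_nvme debug logging (pid 1239979, launched a few lines further down). A sketch of those two launches, with binary paths and arguments copied from this trace:

    # target instance, inside the namespace created above (pid 1239716 in this run)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # host-side instance, driven over a private RPC socket (pid 1239979 in this run)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &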
00:27:10.468 [2024-07-16 00:38:23.975821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.468 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.468 [2024-07-16 00:38:24.070676] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.728 [2024-07-16 00:38:24.164305] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.728 [2024-07-16 00:38:24.164361] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.728 [2024-07-16 00:38:24.164370] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.728 [2024-07-16 00:38:24.164377] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.728 [2024-07-16 00:38:24.164383] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.728 [2024-07-16 00:38:24.164417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.300 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.300 [2024-07-16 00:38:24.815792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.300 [2024-07-16 00:38:24.824013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:11.300 null0 00:27:11.300 [2024-07-16 00:38:24.855972] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1239979 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1239979 /tmp/host.sock 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1239979 ']' 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:11.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.301 00:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.561 [2024-07-16 00:38:24.941219] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:27:11.561 [2024-07-16 00:38:24.941295] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239979 ] 00:27:11.561 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.561 [2024-07-16 00:38:25.012182] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.561 [2024-07-16 00:38:25.087047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.132 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.392 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.392 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:12.392 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.392 00:38:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.332 [2024-07-16 00:38:26.845411] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:13.332 [2024-07-16 00:38:26.845432] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:13.332 [2024-07-16 00:38:26.845445] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:13.332 [2024-07-16 00:38:26.933731] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:13.592 [2024-07-16 00:38:27.159670] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:13.592 [2024-07-16 00:38:27.159717] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:13.592 [2024-07-16 00:38:27.159738] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:13.592 [2024-07-16 00:38:27.159753] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:13.592 [2024-07-16 00:38:27.159772] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.592 [2024-07-16 00:38:27.163898] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2095610 was disconnected and freed. delete nvme_qpair. 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:13.592 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.853 00:38:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:13.853 00:38:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.792 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.074 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:15.074 00:38:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:16.018 00:38:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:16.959 00:38:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:18.343 00:38:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:18.343 00:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:19.285 [2024-07-16 00:38:32.600310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:19.285 [2024-07-16 00:38:32.600358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.285 [2024-07-16 00:38:32.600370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.285 [2024-07-16 00:38:32.600380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.285 [2024-07-16 00:38:32.600388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.285 [2024-07-16 00:38:32.600396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.285 [2024-07-16 00:38:32.600403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.285 [2024-07-16 00:38:32.600410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.285 [2024-07-16 00:38:32.600418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.285 [2024-07-16 00:38:32.600426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.285 [2024-07-16 00:38:32.600433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.285 [2024-07-16 00:38:32.600440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205c110 is same with the state(5) to be set 00:27:19.285 [2024-07-16 00:38:32.610329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205c110 (9): Bad file descriptor 00:27:19.285 [2024-07-16 00:38:32.620368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:19.285 00:38:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.285 00:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:20.226 [2024-07-16 00:38:33.681258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:20.226 [2024-07-16 00:38:33.681303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205c110 with addr=10.0.0.2, port=4420 00:27:20.226 [2024-07-16 00:38:33.681315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205c110 is same with the state(5) to be set 00:27:20.226 [2024-07-16 00:38:33.681342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205c110 (9): Bad file descriptor 00:27:20.226 [2024-07-16 00:38:33.681716] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:20.226 [2024-07-16 00:38:33.681738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:20.226 [2024-07-16 00:38:33.681747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:20.226 [2024-07-16 00:38:33.681755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:20.226 [2024-07-16 00:38:33.681774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:20.226 [2024-07-16 00:38:33.681783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:20.226 00:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.226 00:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:20.226 00:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:21.166 [2024-07-16 00:38:34.684164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:21.166 [2024-07-16 00:38:34.684188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:21.166 [2024-07-16 00:38:34.684196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:21.166 [2024-07-16 00:38:34.684204] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:21.166 [2024-07-16 00:38:34.684217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
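The reconnect attempts and the "Resetting controller failed" errors above are governed by the options the host app was given earlier in this test. Those RPCs, with arguments copied from this trace; scripts/rpc.py here stands in for the suite's rpc_cmd helper, which forwards the same arguments to the same socket:

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }  # stand-in for rpc_cmd
    rpc bdev_nvme_set_options -e 1               # option exactly as issued by discovery_remove_ifc.sh@65
    rpc framework_start_init                     # finish init (the host app was started with --wait-for-rpc)
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach                        # short loss/reconnect timeouts drive the retries seen above

The wait loop that keeps re-running in the trace is the suite's get_bdev_list/wait_for_bdev pair; a simplified sketch of the same polling logic, under the same rpc stand-in:

    get_bdev_list() { rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    wait_for_bdev() {                            # poll until the bdev list matches the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }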
00:27:21.166 [2024-07-16 00:38:34.684244] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:21.166 [2024-07-16 00:38:34.684272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.166 [2024-07-16 00:38:34.684284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.166 [2024-07-16 00:38:34.684294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.166 [2024-07-16 00:38:34.684301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.166 [2024-07-16 00:38:34.684309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.166 [2024-07-16 00:38:34.684317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.166 [2024-07-16 00:38:34.684325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.166 [2024-07-16 00:38:34.684332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.166 [2024-07-16 00:38:34.684340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.166 [2024-07-16 00:38:34.684347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.166 [2024-07-16 00:38:34.684354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
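The admin-queue aborts and the removal of the discovery entry above follow directly from the interface-removal step of this test; the bdev list is then expected to drain to '' before the address is restored. The fault-injection and recovery commands, as they appear in this trace:

    # take the target address and port away (discovery_remove_ifc.sh@75/@76)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ...controller resets fail, the discovery entry is removed, the bdev list drains to ''...
    # bring it back (discovery_remove_ifc.sh@82/@83) and wait for nvme1n1 to appear
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up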
00:27:21.166 [2024-07-16 00:38:34.684725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205b590 (9): Bad file descriptor 00:27:21.166 [2024-07-16 00:38:34.685737] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:21.166 [2024-07-16 00:38:34.685750] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.166 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:21.426 00:38:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:22.365 00:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.312 [2024-07-16 00:38:36.741432] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:23.312 [2024-07-16 00:38:36.741452] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:23.313 [2024-07-16 00:38:36.741466] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:23.313 [2024-07-16 00:38:36.828737] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:23.313 [2024-07-16 00:38:36.930611] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:23.313 [2024-07-16 00:38:36.930648] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:23.313 [2024-07-16 00:38:36.930666] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:23.313 [2024-07-16 00:38:36.930680] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:23.313 [2024-07-16 00:38:36.930687] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:23.313 [2024-07-16 00:38:36.937934] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20a1320 was disconnected and freed. delete nvme_qpair. 
00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.581 00:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1239979 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1239979 ']' 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1239979 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239979 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239979' 00:27:23.581 killing process with pid 1239979 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1239979 00:27:23.581 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1239979 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.841 rmmod nvme_tcp 00:27:23.841 rmmod nvme_fabrics 00:27:23.841 rmmod nvme_keyring 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
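Teardown now runs for both app instances and for the namespace. Most of it is visible verbatim above and below; the namespace removal itself happens inside _remove_spdk_ns with its output redirected away, so the "ip netns delete" line in this sketch is an assumption about what that helper amounts to in this run rather than a literal copy of the trace:

    kill 1239979 && wait 1239979          # host-side app (killprocess $hostpid, as traced here)
    kill 1239716 && wait 1239716          # namespaced target (killprocess $nvmfpid, just below)
    modprobe -v -r nvme-tcp               # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns (its trace output is redirected)
    ip -4 addr flush cvl_0_1              # nvmf/common.sh@279, visible just below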
00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1239716 ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1239716 ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239716' 00:27:23.841 killing process with pid 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1239716 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.841 00:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.389 00:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.389 00:27:26.389 real 0m24.029s 00:27:26.389 user 0m27.546s 00:27:26.389 sys 0m7.409s 00:27:26.389 00:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.389 00:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.389 ************************************ 00:27:26.389 END TEST nvmf_discovery_remove_ifc 00:27:26.389 ************************************ 00:27:26.389 00:38:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:26.389 00:38:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:26.389 00:38:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:26.389 00:38:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.389 00:38:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.389 ************************************ 00:27:26.389 START TEST nvmf_identify_kernel_target 00:27:26.389 ************************************ 
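The END/START banners and the real/user/sys figures above come from the suite's run_test wrapper, which times each test script and frames it with these banners. A simplified sketch of that wrapper; the real helper in autotest_common.sh also does timing bookkeeping and exit-code accounting that is omitted here:

    # simplified run_test: banner, time the test script, banner
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # e.g. run_test nvmf_identify_kernel_target .../identify_kernel_nvmf.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }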
00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:26.389 * Looking for test storage... 00:27:26.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.389 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:26.390 00:38:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.390 00:38:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:34.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:34.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:34.535 Found net devices under 0000:31:00.0: cvl_0_0 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.535 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:34.536 Found net devices under 0000:31:00.1: cvl_0_1 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:27:34.536 00:27:34.536 --- 10.0.0.2 ping statistics --- 00:27:34.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.536 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:27:34.536 00:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
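With connectivity re-verified, identify_kernel_nvmf.sh next configures the Linux kernel nvmet target (configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1, continued below): it loads nvmet, picks a usable block device, and wires up the /sys/kernel/config/nvmet tree whose paths appear in the trace. The individual mkdir/echo steps are truncated in this excerpt, so the following is a generic nvmet-over-TCP configfs sketch using the subsystem NQN, address, and block device from this run, not a verbatim copy of the helper:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    modprobe nvmet
    modprobe nvmet-tcp
    mkdir -p "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # block device the helper selects below
    echo 1            > "$subsys/namespaces/1/enable"
    echo tcp          > "$port/addr_trtype"
    echo ipv4         > "$port/addr_adrfam"
    echo 10.0.0.1     > "$port/addr_traddr"                  # target_ip chosen by the script above
    echo 4420         > "$port/addr_trsvcid"
    ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port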
00:27:34.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:34.536 00:27:34.536 --- 10.0.0.1 ping statistics --- 00:27:34.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.536 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:34.536 00:38:48 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:34.536 00:38:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:38.748 Waiting for block devices as requested 00:27:38.748 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:38.748 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:39.009 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:39.009 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:39.269 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:39.269 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:39.269 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:39.269 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:39.528 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:39.528 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:39.528 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:39.528 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:39.528 No valid GPT data, bailing 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:39.790 00:27:39.790 Discovery Log Number of Records 2, Generation counter 2 00:27:39.790 =====Discovery Log Entry 0====== 00:27:39.790 trtype: tcp 00:27:39.790 adrfam: ipv4 00:27:39.790 subtype: current discovery subsystem 00:27:39.790 treq: not specified, sq flow control disable supported 00:27:39.790 portid: 1 00:27:39.790 trsvcid: 4420 00:27:39.790 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:39.790 traddr: 10.0.0.1 00:27:39.790 eflags: none 00:27:39.790 sectype: none 00:27:39.790 =====Discovery Log Entry 1====== 00:27:39.790 trtype: tcp 00:27:39.790 adrfam: ipv4 00:27:39.790 subtype: nvme subsystem 00:27:39.790 treq: not specified, sq flow control disable supported 00:27:39.790 portid: 1 00:27:39.790 trsvcid: 4420 00:27:39.790 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:39.790 traddr: 10.0.0.1 00:27:39.790 eflags: none 00:27:39.790 sectype: none 00:27:39.790 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:39.790 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:39.791 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.791 ===================================================== 00:27:39.791 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:39.791 ===================================================== 00:27:39.791 Controller Capabilities/Features 00:27:39.791 ================================ 00:27:39.791 Vendor ID: 0000 00:27:39.791 Subsystem Vendor ID: 0000 00:27:39.791 Serial Number: 138c04f5f53a6d0fff25 00:27:39.791 Model Number: Linux 00:27:39.791 Firmware Version: 6.7.0-68 00:27:39.791 Recommended Arb Burst: 0 00:27:39.791 IEEE OUI Identifier: 00 00 00 00:27:39.791 Multi-path I/O 00:27:39.791 May have multiple subsystem ports: No 00:27:39.791 May have multiple 
controllers: No 00:27:39.791 Associated with SR-IOV VF: No 00:27:39.791 Max Data Transfer Size: Unlimited 00:27:39.791 Max Number of Namespaces: 0 00:27:39.791 Max Number of I/O Queues: 1024 00:27:39.791 NVMe Specification Version (VS): 1.3 00:27:39.791 NVMe Specification Version (Identify): 1.3 00:27:39.791 Maximum Queue Entries: 1024 00:27:39.791 Contiguous Queues Required: No 00:27:39.791 Arbitration Mechanisms Supported 00:27:39.791 Weighted Round Robin: Not Supported 00:27:39.791 Vendor Specific: Not Supported 00:27:39.791 Reset Timeout: 7500 ms 00:27:39.791 Doorbell Stride: 4 bytes 00:27:39.791 NVM Subsystem Reset: Not Supported 00:27:39.791 Command Sets Supported 00:27:39.791 NVM Command Set: Supported 00:27:39.791 Boot Partition: Not Supported 00:27:39.791 Memory Page Size Minimum: 4096 bytes 00:27:39.791 Memory Page Size Maximum: 4096 bytes 00:27:39.791 Persistent Memory Region: Not Supported 00:27:39.791 Optional Asynchronous Events Supported 00:27:39.791 Namespace Attribute Notices: Not Supported 00:27:39.791 Firmware Activation Notices: Not Supported 00:27:39.791 ANA Change Notices: Not Supported 00:27:39.791 PLE Aggregate Log Change Notices: Not Supported 00:27:39.791 LBA Status Info Alert Notices: Not Supported 00:27:39.791 EGE Aggregate Log Change Notices: Not Supported 00:27:39.791 Normal NVM Subsystem Shutdown event: Not Supported 00:27:39.791 Zone Descriptor Change Notices: Not Supported 00:27:39.791 Discovery Log Change Notices: Supported 00:27:39.791 Controller Attributes 00:27:39.791 128-bit Host Identifier: Not Supported 00:27:39.791 Non-Operational Permissive Mode: Not Supported 00:27:39.791 NVM Sets: Not Supported 00:27:39.791 Read Recovery Levels: Not Supported 00:27:39.791 Endurance Groups: Not Supported 00:27:39.791 Predictable Latency Mode: Not Supported 00:27:39.791 Traffic Based Keep ALive: Not Supported 00:27:39.791 Namespace Granularity: Not Supported 00:27:39.791 SQ Associations: Not Supported 00:27:39.791 UUID List: Not Supported 00:27:39.791 Multi-Domain Subsystem: Not Supported 00:27:39.791 Fixed Capacity Management: Not Supported 00:27:39.791 Variable Capacity Management: Not Supported 00:27:39.791 Delete Endurance Group: Not Supported 00:27:39.791 Delete NVM Set: Not Supported 00:27:39.791 Extended LBA Formats Supported: Not Supported 00:27:39.791 Flexible Data Placement Supported: Not Supported 00:27:39.791 00:27:39.791 Controller Memory Buffer Support 00:27:39.791 ================================ 00:27:39.791 Supported: No 00:27:39.791 00:27:39.791 Persistent Memory Region Support 00:27:39.791 ================================ 00:27:39.791 Supported: No 00:27:39.791 00:27:39.791 Admin Command Set Attributes 00:27:39.791 ============================ 00:27:39.791 Security Send/Receive: Not Supported 00:27:39.791 Format NVM: Not Supported 00:27:39.791 Firmware Activate/Download: Not Supported 00:27:39.791 Namespace Management: Not Supported 00:27:39.792 Device Self-Test: Not Supported 00:27:39.792 Directives: Not Supported 00:27:39.792 NVMe-MI: Not Supported 00:27:39.792 Virtualization Management: Not Supported 00:27:39.792 Doorbell Buffer Config: Not Supported 00:27:39.792 Get LBA Status Capability: Not Supported 00:27:39.792 Command & Feature Lockdown Capability: Not Supported 00:27:39.792 Abort Command Limit: 1 00:27:39.792 Async Event Request Limit: 1 00:27:39.792 Number of Firmware Slots: N/A 00:27:39.792 Firmware Slot 1 Read-Only: N/A 00:27:39.792 Firmware Activation Without Reset: N/A 00:27:39.792 Multiple Update Detection Support: N/A 
00:27:39.792 Firmware Update Granularity: No Information Provided 00:27:39.792 Per-Namespace SMART Log: No 00:27:39.792 Asymmetric Namespace Access Log Page: Not Supported 00:27:39.792 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:39.792 Command Effects Log Page: Not Supported 00:27:39.792 Get Log Page Extended Data: Supported 00:27:39.792 Telemetry Log Pages: Not Supported 00:27:39.792 Persistent Event Log Pages: Not Supported 00:27:39.792 Supported Log Pages Log Page: May Support 00:27:39.792 Commands Supported & Effects Log Page: Not Supported 00:27:39.792 Feature Identifiers & Effects Log Page:May Support 00:27:39.792 NVMe-MI Commands & Effects Log Page: May Support 00:27:39.792 Data Area 4 for Telemetry Log: Not Supported 00:27:39.792 Error Log Page Entries Supported: 1 00:27:39.792 Keep Alive: Not Supported 00:27:39.792 00:27:39.792 NVM Command Set Attributes 00:27:39.792 ========================== 00:27:39.792 Submission Queue Entry Size 00:27:39.792 Max: 1 00:27:39.792 Min: 1 00:27:39.792 Completion Queue Entry Size 00:27:39.792 Max: 1 00:27:39.792 Min: 1 00:27:39.792 Number of Namespaces: 0 00:27:39.792 Compare Command: Not Supported 00:27:39.792 Write Uncorrectable Command: Not Supported 00:27:39.792 Dataset Management Command: Not Supported 00:27:39.792 Write Zeroes Command: Not Supported 00:27:39.792 Set Features Save Field: Not Supported 00:27:39.792 Reservations: Not Supported 00:27:39.792 Timestamp: Not Supported 00:27:39.792 Copy: Not Supported 00:27:39.792 Volatile Write Cache: Not Present 00:27:39.792 Atomic Write Unit (Normal): 1 00:27:39.792 Atomic Write Unit (PFail): 1 00:27:39.792 Atomic Compare & Write Unit: 1 00:27:39.792 Fused Compare & Write: Not Supported 00:27:39.792 Scatter-Gather List 00:27:39.792 SGL Command Set: Supported 00:27:39.792 SGL Keyed: Not Supported 00:27:39.792 SGL Bit Bucket Descriptor: Not Supported 00:27:39.792 SGL Metadata Pointer: Not Supported 00:27:39.792 Oversized SGL: Not Supported 00:27:39.792 SGL Metadata Address: Not Supported 00:27:39.792 SGL Offset: Supported 00:27:39.792 Transport SGL Data Block: Not Supported 00:27:39.792 Replay Protected Memory Block: Not Supported 00:27:39.792 00:27:39.792 Firmware Slot Information 00:27:39.792 ========================= 00:27:39.792 Active slot: 0 00:27:39.792 00:27:39.792 00:27:39.792 Error Log 00:27:39.792 ========= 00:27:39.792 00:27:39.792 Active Namespaces 00:27:39.792 ================= 00:27:39.792 Discovery Log Page 00:27:39.792 ================== 00:27:39.792 Generation Counter: 2 00:27:39.792 Number of Records: 2 00:27:39.792 Record Format: 0 00:27:39.792 00:27:39.792 Discovery Log Entry 0 00:27:39.792 ---------------------- 00:27:39.792 Transport Type: 3 (TCP) 00:27:39.792 Address Family: 1 (IPv4) 00:27:39.792 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:39.792 Entry Flags: 00:27:39.792 Duplicate Returned Information: 0 00:27:39.792 Explicit Persistent Connection Support for Discovery: 0 00:27:39.792 Transport Requirements: 00:27:39.792 Secure Channel: Not Specified 00:27:39.793 Port ID: 1 (0x0001) 00:27:39.793 Controller ID: 65535 (0xffff) 00:27:39.793 Admin Max SQ Size: 32 00:27:39.793 Transport Service Identifier: 4420 00:27:39.793 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:39.793 Transport Address: 10.0.0.1 00:27:39.793 Discovery Log Entry 1 00:27:39.793 ---------------------- 00:27:39.793 Transport Type: 3 (TCP) 00:27:39.793 Address Family: 1 (IPv4) 00:27:39.793 Subsystem Type: 2 (NVM Subsystem) 00:27:39.793 Entry Flags: 
00:27:39.793 Duplicate Returned Information: 0 00:27:39.793 Explicit Persistent Connection Support for Discovery: 0 00:27:39.793 Transport Requirements: 00:27:39.793 Secure Channel: Not Specified 00:27:39.793 Port ID: 1 (0x0001) 00:27:39.793 Controller ID: 65535 (0xffff) 00:27:39.793 Admin Max SQ Size: 32 00:27:39.793 Transport Service Identifier: 4420 00:27:39.793 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:39.793 Transport Address: 10.0.0.1 00:27:39.793 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.793 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.793 get_feature(0x01) failed 00:27:39.793 get_feature(0x02) failed 00:27:39.793 get_feature(0x04) failed 00:27:39.793 ===================================================== 00:27:39.793 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:39.793 ===================================================== 00:27:39.793 Controller Capabilities/Features 00:27:39.793 ================================ 00:27:39.793 Vendor ID: 0000 00:27:39.793 Subsystem Vendor ID: 0000 00:27:39.793 Serial Number: 527cccd07585042c75c7 00:27:39.793 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:39.793 Firmware Version: 6.7.0-68 00:27:39.793 Recommended Arb Burst: 6 00:27:39.793 IEEE OUI Identifier: 00 00 00 00:27:39.793 Multi-path I/O 00:27:39.793 May have multiple subsystem ports: Yes 00:27:39.793 May have multiple controllers: Yes 00:27:39.793 Associated with SR-IOV VF: No 00:27:39.793 Max Data Transfer Size: Unlimited 00:27:39.793 Max Number of Namespaces: 1024 00:27:39.793 Max Number of I/O Queues: 128 00:27:39.793 NVMe Specification Version (VS): 1.3 00:27:39.793 NVMe Specification Version (Identify): 1.3 00:27:39.793 Maximum Queue Entries: 1024 00:27:39.793 Contiguous Queues Required: No 00:27:39.793 Arbitration Mechanisms Supported 00:27:39.793 Weighted Round Robin: Not Supported 00:27:39.793 Vendor Specific: Not Supported 00:27:39.793 Reset Timeout: 7500 ms 00:27:39.793 Doorbell Stride: 4 bytes 00:27:39.793 NVM Subsystem Reset: Not Supported 00:27:39.793 Command Sets Supported 00:27:39.793 NVM Command Set: Supported 00:27:39.793 Boot Partition: Not Supported 00:27:39.793 Memory Page Size Minimum: 4096 bytes 00:27:39.793 Memory Page Size Maximum: 4096 bytes 00:27:39.793 Persistent Memory Region: Not Supported 00:27:39.793 Optional Asynchronous Events Supported 00:27:39.793 Namespace Attribute Notices: Supported 00:27:39.793 Firmware Activation Notices: Not Supported 00:27:39.793 ANA Change Notices: Supported 00:27:39.793 PLE Aggregate Log Change Notices: Not Supported 00:27:39.793 LBA Status Info Alert Notices: Not Supported 00:27:39.793 EGE Aggregate Log Change Notices: Not Supported 00:27:39.793 Normal NVM Subsystem Shutdown event: Not Supported 00:27:39.793 Zone Descriptor Change Notices: Not Supported 00:27:39.793 Discovery Log Change Notices: Not Supported 00:27:39.793 Controller Attributes 00:27:39.793 128-bit Host Identifier: Supported 00:27:39.793 Non-Operational Permissive Mode: Not Supported 00:27:39.793 NVM Sets: Not Supported 00:27:39.793 Read Recovery Levels: Not Supported 00:27:39.793 Endurance Groups: Not Supported 00:27:39.793 Predictable Latency Mode: Not Supported 00:27:39.793 Traffic Based Keep ALive: Supported 00:27:39.793 Namespace Granularity: Not Supported 
00:27:39.793 SQ Associations: Not Supported 00:27:39.793 UUID List: Not Supported 00:27:39.793 Multi-Domain Subsystem: Not Supported 00:27:39.793 Fixed Capacity Management: Not Supported 00:27:39.793 Variable Capacity Management: Not Supported 00:27:39.793 Delete Endurance Group: Not Supported 00:27:39.793 Delete NVM Set: Not Supported 00:27:39.793 Extended LBA Formats Supported: Not Supported 00:27:39.793 Flexible Data Placement Supported: Not Supported 00:27:39.793 00:27:39.793 Controller Memory Buffer Support 00:27:39.793 ================================ 00:27:39.793 Supported: No 00:27:39.793 00:27:39.793 Persistent Memory Region Support 00:27:39.793 ================================ 00:27:39.793 Supported: No 00:27:39.793 00:27:39.793 Admin Command Set Attributes 00:27:39.794 ============================ 00:27:39.794 Security Send/Receive: Not Supported 00:27:39.794 Format NVM: Not Supported 00:27:39.794 Firmware Activate/Download: Not Supported 00:27:39.794 Namespace Management: Not Supported 00:27:39.794 Device Self-Test: Not Supported 00:27:39.794 Directives: Not Supported 00:27:39.794 NVMe-MI: Not Supported 00:27:39.794 Virtualization Management: Not Supported 00:27:39.794 Doorbell Buffer Config: Not Supported 00:27:39.794 Get LBA Status Capability: Not Supported 00:27:39.794 Command & Feature Lockdown Capability: Not Supported 00:27:39.794 Abort Command Limit: 4 00:27:39.794 Async Event Request Limit: 4 00:27:39.794 Number of Firmware Slots: N/A 00:27:39.794 Firmware Slot 1 Read-Only: N/A 00:27:39.794 Firmware Activation Without Reset: N/A 00:27:39.794 Multiple Update Detection Support: N/A 00:27:39.794 Firmware Update Granularity: No Information Provided 00:27:39.794 Per-Namespace SMART Log: Yes 00:27:39.794 Asymmetric Namespace Access Log Page: Supported 00:27:39.794 ANA Transition Time : 10 sec 00:27:39.794 00:27:39.794 Asymmetric Namespace Access Capabilities 00:27:39.794 ANA Optimized State : Supported 00:27:39.794 ANA Non-Optimized State : Supported 00:27:39.794 ANA Inaccessible State : Supported 00:27:39.794 ANA Persistent Loss State : Supported 00:27:39.794 ANA Change State : Supported 00:27:39.794 ANAGRPID is not changed : No 00:27:39.794 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:39.794 00:27:39.794 ANA Group Identifier Maximum : 128 00:27:39.794 Number of ANA Group Identifiers : 128 00:27:39.794 Max Number of Allowed Namespaces : 1024 00:27:39.794 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:39.794 Command Effects Log Page: Supported 00:27:39.794 Get Log Page Extended Data: Supported 00:27:39.794 Telemetry Log Pages: Not Supported 00:27:39.794 Persistent Event Log Pages: Not Supported 00:27:39.794 Supported Log Pages Log Page: May Support 00:27:39.794 Commands Supported & Effects Log Page: Not Supported 00:27:39.794 Feature Identifiers & Effects Log Page:May Support 00:27:39.794 NVMe-MI Commands & Effects Log Page: May Support 00:27:39.794 Data Area 4 for Telemetry Log: Not Supported 00:27:39.794 Error Log Page Entries Supported: 128 00:27:39.794 Keep Alive: Supported 00:27:39.794 Keep Alive Granularity: 1000 ms 00:27:39.794 00:27:39.794 NVM Command Set Attributes 00:27:39.794 ========================== 00:27:39.794 Submission Queue Entry Size 00:27:39.794 Max: 64 00:27:39.794 Min: 64 00:27:39.794 Completion Queue Entry Size 00:27:39.794 Max: 16 00:27:39.794 Min: 16 00:27:39.794 Number of Namespaces: 1024 00:27:39.794 Compare Command: Not Supported 00:27:39.794 Write Uncorrectable Command: Not Supported 00:27:39.794 Dataset Management Command: Supported 
00:27:39.794 Write Zeroes Command: Supported 00:27:39.794 Set Features Save Field: Not Supported 00:27:39.794 Reservations: Not Supported 00:27:39.794 Timestamp: Not Supported 00:27:39.794 Copy: Not Supported 00:27:39.794 Volatile Write Cache: Present 00:27:39.794 Atomic Write Unit (Normal): 1 00:27:39.794 Atomic Write Unit (PFail): 1 00:27:39.794 Atomic Compare & Write Unit: 1 00:27:39.794 Fused Compare & Write: Not Supported 00:27:39.794 Scatter-Gather List 00:27:39.794 SGL Command Set: Supported 00:27:39.794 SGL Keyed: Not Supported 00:27:39.794 SGL Bit Bucket Descriptor: Not Supported 00:27:39.794 SGL Metadata Pointer: Not Supported 00:27:39.794 Oversized SGL: Not Supported 00:27:39.794 SGL Metadata Address: Not Supported 00:27:39.794 SGL Offset: Supported 00:27:39.794 Transport SGL Data Block: Not Supported 00:27:39.794 Replay Protected Memory Block: Not Supported 00:27:39.794 00:27:39.794 Firmware Slot Information 00:27:39.794 ========================= 00:27:39.794 Active slot: 0 00:27:39.794 00:27:39.794 Asymmetric Namespace Access 00:27:39.794 =========================== 00:27:39.794 Change Count : 0 00:27:39.794 Number of ANA Group Descriptors : 1 00:27:39.794 ANA Group Descriptor : 0 00:27:39.794 ANA Group ID : 1 00:27:39.794 Number of NSID Values : 1 00:27:39.794 Change Count : 0 00:27:39.794 ANA State : 1 00:27:39.794 Namespace Identifier : 1 00:27:39.794 00:27:39.794 Commands Supported and Effects 00:27:39.794 ============================== 00:27:39.794 Admin Commands 00:27:39.794 -------------- 00:27:39.794 Get Log Page (02h): Supported 00:27:39.794 Identify (06h): Supported 00:27:39.794 Abort (08h): Supported 00:27:39.794 Set Features (09h): Supported 00:27:39.794 Get Features (0Ah): Supported 00:27:39.794 Asynchronous Event Request (0Ch): Supported 00:27:39.794 Keep Alive (18h): Supported 00:27:39.794 I/O Commands 00:27:39.794 ------------ 00:27:39.794 Flush (00h): Supported 00:27:39.794 Write (01h): Supported LBA-Change 00:27:39.794 Read (02h): Supported 00:27:39.795 Write Zeroes (08h): Supported LBA-Change 00:27:39.795 Dataset Management (09h): Supported 00:27:39.795 00:27:39.795 Error Log 00:27:39.795 ========= 00:27:39.795 Entry: 0 00:27:39.795 Error Count: 0x3 00:27:39.795 Submission Queue Id: 0x0 00:27:39.795 Command Id: 0x5 00:27:39.795 Phase Bit: 0 00:27:39.795 Status Code: 0x2 00:27:39.795 Status Code Type: 0x0 00:27:39.795 Do Not Retry: 1 00:27:39.795 Error Location: 0x28 00:27:39.795 LBA: 0x0 00:27:39.795 Namespace: 0x0 00:27:39.795 Vendor Log Page: 0x0 00:27:39.795 ----------- 00:27:39.795 Entry: 1 00:27:39.795 Error Count: 0x2 00:27:39.795 Submission Queue Id: 0x0 00:27:39.795 Command Id: 0x5 00:27:39.795 Phase Bit: 0 00:27:39.795 Status Code: 0x2 00:27:39.795 Status Code Type: 0x0 00:27:39.795 Do Not Retry: 1 00:27:39.795 Error Location: 0x28 00:27:39.795 LBA: 0x0 00:27:39.795 Namespace: 0x0 00:27:39.795 Vendor Log Page: 0x0 00:27:39.795 ----------- 00:27:39.795 Entry: 2 00:27:39.795 Error Count: 0x1 00:27:39.795 Submission Queue Id: 0x0 00:27:39.795 Command Id: 0x4 00:27:39.795 Phase Bit: 0 00:27:39.795 Status Code: 0x2 00:27:39.795 Status Code Type: 0x0 00:27:39.795 Do Not Retry: 1 00:27:39.795 Error Location: 0x28 00:27:39.795 LBA: 0x0 00:27:39.795 Namespace: 0x0 00:27:39.795 Vendor Log Page: 0x0 00:27:39.795 00:27:39.795 Number of Queues 00:27:39.795 ================ 00:27:39.795 Number of I/O Submission Queues: 128 00:27:39.795 Number of I/O Completion Queues: 128 00:27:39.795 00:27:39.795 ZNS Specific Controller Data 00:27:39.795 
============================ 00:27:39.795 Zone Append Size Limit: 0 00:27:39.795 00:27:39.795 00:27:39.795 Active Namespaces 00:27:39.795 ================= 00:27:39.795 get_feature(0x05) failed 00:27:39.795 Namespace ID:1 00:27:39.795 Command Set Identifier: NVM (00h) 00:27:39.795 Deallocate: Supported 00:27:39.795 Deallocated/Unwritten Error: Not Supported 00:27:39.795 Deallocated Read Value: Unknown 00:27:39.795 Deallocate in Write Zeroes: Not Supported 00:27:39.795 Deallocated Guard Field: 0xFFFF 00:27:39.795 Flush: Supported 00:27:39.795 Reservation: Not Supported 00:27:39.795 Namespace Sharing Capabilities: Multiple Controllers 00:27:39.795 Size (in LBAs): 3750748848 (1788GiB) 00:27:39.795 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:39.795 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:39.795 UUID: e5d4a9f8-d05c-4847-a334-1b5b5b7d5958 00:27:39.795 Thin Provisioning: Not Supported 00:27:39.795 Per-NS Atomic Units: Yes 00:27:39.795 Atomic Write Unit (Normal): 8 00:27:39.795 Atomic Write Unit (PFail): 8 00:27:39.795 Preferred Write Granularity: 8 00:27:39.795 Atomic Compare & Write Unit: 8 00:27:39.795 Atomic Boundary Size (Normal): 0 00:27:39.795 Atomic Boundary Size (PFail): 0 00:27:39.795 Atomic Boundary Offset: 0 00:27:39.795 NGUID/EUI64 Never Reused: No 00:27:39.795 ANA group ID: 1 00:27:39.795 Namespace Write Protected: No 00:27:39.795 Number of LBA Formats: 1 00:27:39.795 Current LBA Format: LBA Format #00 00:27:39.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:39.795 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.795 rmmod nvme_tcp 00:27:39.795 rmmod nvme_fabrics 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.795 00:38:53 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:42.339 00:38:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:46.543 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:46.543 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:46.543 00:27:46.543 real 0m20.000s 00:27:46.543 user 0m5.559s 00:27:46.543 sys 0m11.540s 00:27:46.543 00:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.543 00:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.543 ************************************ 00:27:46.543 END TEST nvmf_identify_kernel_target 00:27:46.543 ************************************ 00:27:46.543 00:38:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:46.543 00:38:59 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:46.543 00:38:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:46.543 00:38:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.543 00:38:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.543 ************************************ 
00:27:46.543 START TEST nvmf_auth_host 00:27:46.543 ************************************ 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:46.543 * Looking for test storage... 00:27:46.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.543 00:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.761 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.761 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.761 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.762 
00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:54.762 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:54.762 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:54.762 Found net devices under 0000:31:00.0: 
cvl_0_0 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:54.762 Found net devices under 0000:31:00.1: cvl_0_1 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.762 00:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:54.762 00:27:54.762 --- 10.0.0.2 ping statistics --- 00:27:54.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.762 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:27:54.762 00:27:54.762 --- 10.0.0.1 ping statistics --- 00:27:54.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.762 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1255519 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1255519 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1255519 ']' 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.762 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.334 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.334 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:55.334 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.334 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:55.334 00:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1dd302852ffb2533cd0ab644d33a58c4 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mcm 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1dd302852ffb2533cd0ab644d33a58c4 0 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1dd302852ffb2533cd0ab644d33a58c4 0 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.595 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1dd302852ffb2533cd0ab644d33a58c4 00:27:55.596 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:55.596 00:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mcm 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mcm 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mcm 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:55.596 
00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bf36aedb28925b74b51eff58c4a142b8d9441dbab725e3c08c38d10e451afa3 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.S3y 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bf36aedb28925b74b51eff58c4a142b8d9441dbab725e3c08c38d10e451afa3 3 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bf36aedb28925b74b51eff58c4a142b8d9441dbab725e3c08c38d10e451afa3 3 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bf36aedb28925b74b51eff58c4a142b8d9441dbab725e3c08c38d10e451afa3 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.S3y 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.S3y 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.S3y 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a90f8c132c1fa194f1c76a492f2114b0168c749b3cd8a6a 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AcR 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a90f8c132c1fa194f1c76a492f2114b0168c749b3cd8a6a 0 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a90f8c132c1fa194f1c76a492f2114b0168c749b3cd8a6a 0 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a90f8c132c1fa194f1c76a492f2114b0168c749b3cd8a6a 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AcR 00:27:55.596 00:39:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AcR 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AcR 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90ef1cb3593a947c4c456c61698d25efe7aca5809e6ed786 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lJI 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90ef1cb3593a947c4c456c61698d25efe7aca5809e6ed786 2 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90ef1cb3593a947c4c456c61698d25efe7aca5809e6ed786 2 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90ef1cb3593a947c4c456c61698d25efe7aca5809e6ed786 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lJI 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lJI 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lJI 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=694ebffebbc563ca25d3dbe9cecb5a8f 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FwU 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 694ebffebbc563ca25d3dbe9cecb5a8f 1 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 694ebffebbc563ca25d3dbe9cecb5a8f 1 
00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=694ebffebbc563ca25d3dbe9cecb5a8f 00:27:55.596 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FwU 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FwU 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FwU 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f7238afbd36c86f3fcbf249a367500e4 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dLR 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f7238afbd36c86f3fcbf249a367500e4 1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f7238afbd36c86f3fcbf249a367500e4 1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f7238afbd36c86f3fcbf249a367500e4 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dLR 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dLR 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dLR 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=833378ab64d01707a87fba164fdecae9c0e49a0cb1700582 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DvE 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 833378ab64d01707a87fba164fdecae9c0e49a0cb1700582 2 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 833378ab64d01707a87fba164fdecae9c0e49a0cb1700582 2 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=833378ab64d01707a87fba164fdecae9c0e49a0cb1700582 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DvE 00:27:55.857 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DvE 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DvE 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5408fbe4b31784a6d5b4483c0cdc3225 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PLu 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5408fbe4b31784a6d5b4483c0cdc3225 0 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5408fbe4b31784a6d5b4483c0cdc3225 0 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5408fbe4b31784a6d5b4483c0cdc3225 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PLu 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PLu 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.PLu 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3a56546f6098cdc8c15bc01d46e5dcec126c8165ba94f092bc552465300360fc 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rEJ 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3a56546f6098cdc8c15bc01d46e5dcec126c8165ba94f092bc552465300360fc 3 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3a56546f6098cdc8c15bc01d46e5dcec126c8165ba94f092bc552465300360fc 3 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3a56546f6098cdc8c15bc01d46e5dcec126c8165ba94f092bc552465300360fc 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:55.858 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rEJ 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rEJ 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rEJ 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1255519 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1255519 ']' 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.118 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
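Note: every gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes with xxd from /dev/urandom, wrap the hex string into the DHHC-1 secret representation with the inline python (base64 of the key material plus a CRC, tagged with the digest index), and stash the result in a chmod-0600 temp file whose path becomes keys[i] or ckeys[i]. A condensed sketch under those assumptions, with format_dhchap_key standing in for the inline python (names mirror the trace; this is not a drop-in copy of nvmf/common.sh):

  # Sketch only; format_dhchap_key is an assumed helper printing
  # DHHC-1:<digest index>:<base64(key bytes + CRC)>: on stdout.
  gen_dhchap_key() {
      local digest=$1 len=$2                                  # e.g. "null" 32, "sha512" 64
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }
  keys[0]=$(gen_dhchap_key null 32)     # -> /tmp/spdk.key-null.mcm in the trace
  ckeys[0]=$(gen_dhchap_key sha512 64)  # -> /tmp/spdk.key-sha512.S3y in the trace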
00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mcm 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.S3y ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S3y 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AcR 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lJI ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lJI 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FwU 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dLR ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dLR 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DvE 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PLu ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PLu 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rEJ 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.119 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
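Note: the loop above registers each generated secret file with the running nvmf_tgt through its keyring, pairing every keyN with its controller key ckeyN when one was generated. Issued directly against the RPC socket the trace waits on, the same registration looks like the following (the rpc.py path is the usual SPDK location and is an assumption here; only the first two pairs are shown):

  # Equivalent of the rpc_cmd keyring_file_add_key calls in the trace.
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.mcm
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S3y
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.AcR
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lJI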
00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:56.380 00:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:00.585 Waiting for block devices as requested 00:28:00.585 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:00.585 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:00.845 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:00.845 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:00.845 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:00.845 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:01.106 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:01.106 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:01.106 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:01.366 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:01.938 No valid GPT data, bailing 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:01.938 00:28:01.938 Discovery Log Number of Records 2, Generation counter 2 00:28:01.938 =====Discovery Log Entry 0====== 00:28:01.938 trtype: tcp 00:28:01.938 adrfam: ipv4 00:28:01.938 subtype: current discovery subsystem 00:28:01.938 treq: not specified, sq flow control disable supported 00:28:01.938 portid: 1 00:28:01.938 trsvcid: 4420 00:28:01.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:01.938 traddr: 10.0.0.1 00:28:01.938 eflags: none 00:28:01.938 sectype: none 00:28:01.938 =====Discovery Log Entry 1====== 00:28:01.938 trtype: tcp 00:28:01.938 adrfam: ipv4 00:28:01.938 subtype: nvme subsystem 00:28:01.938 treq: not specified, sq flow control disable supported 00:28:01.938 portid: 1 00:28:01.938 trsvcid: 4420 00:28:01.938 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:01.938 traddr: 10.0.0.1 00:28:01.938 eflags: none 00:28:01.938 sectype: none 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 
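Note: configure_kernel_target and nvmet_auth_init above are plain configfs writes: load nvmet, create subsystem nqn.2024-02.io.spdk:cnode0 backed by /dev/nvme0n1, expose it on a TCP port at 10.0.0.1:4420, then turn off allow_any_host and whitelist nqn.2024-02.io.spdk:host0 so only the authenticated host may connect. The trace only shows the values being echoed; the attribute names below are the stock nvmet configfs ones and should be read as an assumption:

  # Target-side configfs layout built by the trace above (sketch).
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1" "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  echo 0 > "$subsys/attr_allow_any_host"
  ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The nvme discover output above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are listening on 10.0.0.1:4420 before any authenticated connect is attempted.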
]] 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:01.938 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.939 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 nvme0n1 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.200 00:39:15 
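Note: on the host side connect_authenticate is essentially two RPCs: bdev_nvme_set_options advertises which DH-HMAC-CHAP digests and DH groups the initiator may negotiate, and bdev_nvme_attach_controller connects to the kernel target using the keyring entries registered earlier (key1 as the host secret, ckey1 as the controller secret for bidirectional authentication). As plain rpc.py calls, under the same rpc.py/socket assumption as before:

  # Host/initiator side of the first authenticated connect in the trace.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

The nvme0n1 that appears right after the attach is the namespace surfacing on the host; bdev_nvme_get_controllers and bdev_nvme_detach_controller then verify the session and tear it down before the next combination is tried.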
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.200 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.201 
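Note: nvmet_auth_set_key is the target-side half of each iteration: the 'hmac(sha256)', 'ffdhe2048' and DHHC-1 echoes above are the digest, DH group, host key and controller key being written into the host entry created during nvmet_auth_init. The destination attribute names are not visible in the trace; the ones below are the standard nvmet host auth attributes and are an assumption, with the values copied from the trace:

  # Sketch of where the echoed values most plausibly land on the target.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH:' > "$host/dhchap_key"
  echo 'DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=:' > "$host/dhchap_ctrl_key"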
00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.201 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.462 nvme0n1 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.462 00:39:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.462 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.463 00:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.725 nvme0n1 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
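Note: from this point the log repeats one fixed pattern per combination: for every digest (sha256, sha384, sha512), every DH group (ffdhe2048 through ffdhe8192) and every key index 0-4, the target-side secrets are re-programmed with nvmet_auth_set_key and the host runs connect_authenticate, which connects with that digest/dhgroup/key pair, checks the controller with bdev_nvme_get_controllers and detaches it again. Condensed from the for-loops visible in the trace (helper bodies omitted):

  # Shape of the sweep the remainder of the log walks through (sketch).
  for digest in "${digests[@]}"; do              # sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 .. ffdhe8192
          for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host: connect, verify, detach
          done
      done
  done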
00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.725 nvme0n1 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.725 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:02.987 00:39:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.987 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.988 nvme0n1 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.988 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.249 nvme0n1 00:28:03.249 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.250 00:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.511 nvme0n1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.511 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.512 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.773 nvme0n1 00:28:03.773 
00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.773 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.034 nvme0n1 00:28:04.034 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.034 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.034 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.034 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
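[editor's note] Each iteration traced above follows the same host-side RPC pattern: restrict the allowed DH-HMAC-CHAP digest/dhgroup, attach the controller with the key under test (adding the controller key only when a ckeyN exists for that index), confirm the controller came up, then detach it. A minimal standalone sketch of that sequence, assuming scripts/rpc.py is the direct equivalent of the traced rpc_cmd wrapper and that key names such as key2/ckey2 were loaded earlier in auth.sh outside this excerpt:

  # 1. Allow only the digest/dhgroup pair under test on the initiator side.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # 2. Attach with the per-iteration key; --dhchap-ctrlr-key is passed only when ckeyN is non-empty.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # 3. Verify the authenticated controller exists, then remove it for the next iteration.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0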
00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.035 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.296 nvme0n1 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.296 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.297 
00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.297 00:39:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.297 00:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.558 nvme0n1 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:04.558 00:39:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.558 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.820 nvme0n1 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.820 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.082 00:39:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.082 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.344 nvme0n1 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.344 00:39:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.344 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.345 00:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.605 nvme0n1 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
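[editor's note] The get_main_ns_ip helper traced at nvmf/common.sh@741-755 resolves which address to dial: it maps the transport to an environment variable name (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and echoes that variable's value, 10.0.0.1 in this run. A sketch consistent with the traced lines; the name of the transport variable and the exact indirection are not visible in this excerpt and are assumptions:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # transport variable name assumed; the trace only shows the expanded value "tcp"
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}     # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1              # indirect expansion, 10.0.0.1 here
      echo "${!ip}"
  }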
00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.605 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.865 nvme0n1 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.865 00:39:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.865 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.124 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.383 nvme0n1 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:06.383 00:39:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.383 00:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.952 nvme0n1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.952 
00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.952 00:39:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.952 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.211 nvme0n1 00:28:07.211 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.507 00:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 nvme0n1 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.767 
00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.767 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.768 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 nvme0n1 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.337 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.338 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.338 00:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.338 00:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.338 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.338 00:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.908 nvme0n1 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.908 00:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.851 nvme0n1 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.851 00:39:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.851 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.852 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.422 nvme0n1 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.422 00:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.422 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.682 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.252 nvme0n1 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.252 
00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.252 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
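Each connect_authenticate pass traced above (host/auth.sh@55-65), including the sha256/ffdhe8192 key-index-3 pass in progress here, boils down to four host-side RPCs. A minimal standalone sketch follows, assuming the suite's rpc_cmd wrapper forwards to scripts/rpc.py, that the target configured earlier in this run still listens on 10.0.0.1:4420, and that key3/ckey3 name DH-HMAC-CHAP secrets registered before this excerpt:

# Sketch of one connect_authenticate cycle (sha256 digest, ffdhe8192 DH group, key id 3).
# key3/ckey3 are assumed to have been registered earlier in the run (not shown here).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3            # authenticate while connecting
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next key id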
00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.253 00:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.196 nvme0n1 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.196 
00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.196 00:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.768 nvme0n1 00:28:12.768 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.768 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.768 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.768 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.768 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.029 nvme0n1 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.029 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
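By this point the outer digest loop has moved on from the sha256 rounds to sha384 with ffdhe2048. The nesting that generates every pass in this excerpt shows up in the host/auth.sh@100-104 entries; a reconstructed sketch is below, where any array contents beyond the digests, DH groups, and key ids 0-4 actually seen in this excerpt are assumed:

# Reconstructed driver loop (from the host/auth.sh@100-104 trace lines above).
for digest in "${digests[@]}"; do               # sha256 and sha384 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do         # ffdhe2048, ffdhe6144, ffdhe8192 appear here
        for keyid in "${!keys[@]}"; do          # key ids 0-4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target's expected secrets
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, check controller name, detach
        done
    done
done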
00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 nvme0n1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.291 00:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.552 nvme0n1 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.552 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.553 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.813 nvme0n1 00:28:13.813 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.813 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.813 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 nvme0n1 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.814 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.074 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
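The nvmf/common.sh@741-755 entries that keep repeating above are the get_main_ns_ip helper resolving which address the host should dial for the transport under test. The following is only a sketch of that lookup, reconstructed from the xtrace output alone; the helper's real body in test/nvmf/common.sh may differ, and the TEST_TRANSPORT variable name is an assumption not visible in this excerpt.

# Sketch of the address lookup traced at nvmf/common.sh@741-755.
# Reconstructed from the log; TEST_TRANSPORT is assumed, the candidate
# variable names (NVMF_FIRST_TARGET_IP, NVMF_INITIATOR_IP) come from the trace.
get_main_ns_ip() {
    local ip=""
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Both guards from common.sh@747: the transport and its candidate
    # variable name must be non-empty before the name is dereferenced.
    if [[ -n ${TEST_TRANSPORT:-} ]] && [[ -n ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
        ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion of the variable name
    fi
    [[ -n $ip ]] && echo "$ip"                 # common.sh@750 / @755
}

In this run TEST_TRANSPORT is tcp and NVMF_INITIATOR_IP is 10.0.0.1, which is why every iteration of the trace ends with "echo 10.0.0.1" before the attach.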
00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.075 nvme0n1 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.075 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
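The host/auth.sh@101-104 markers that recur through this trace are the outer test loops: for every DH group, the target-side key is re-provisioned for each key index and then a full authenticated connect is attempted. Read back from the trace, the driver has roughly the shape below; this is a sketch only, assuming the keys/ckeys arrays, nvmet_auth_set_key and connect_authenticate are already defined earlier in host/auth.sh, and only the digest and DH groups seen in this excerpt are listed.

# Approximate shape of the loop traced at host/auth.sh@101-104.
digest=sha384                                        # only digest in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised so far
for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                   # host/auth.sh@102 (keyids 0..4)
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target-side key
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host-side attach
    done
done

Each connect_authenticate call then expands into the @55-@65 entries that follow it: bdev_nvme_set_options, the attach with the matching key pair, a name check via bdev_nvme_get_controllers, and a detach before the next key is tried.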
00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.336 nvme0n1 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.336 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.597 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.597 00:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.597 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.597 00:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.597 nvme0n1 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.597 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.858 nvme0n1 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.858 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:15.118 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.119 nvme0n1 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.119 00:39:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.119 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.379 00:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.639 nvme0n1 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.639 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.640 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 nvme0n1 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.900 00:39:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.900 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.161 nvme0n1 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.161 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:16.422 00:39:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.422 00:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.683 nvme0n1 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.683 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:16.684 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.945 nvme0n1 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.945 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.946 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.517 nvme0n1 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.517 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.518 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.090 nvme0n1 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.090 00:39:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.090 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.662 nvme0n1 00:28:18.662 00:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.662 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.922 nvme0n1 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.922 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
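Each iteration traced here follows the same two-step pattern: nvmet_auth_set_key programs the kernel target with the chosen digest, FFDHE group and DHHC-1 secrets, then connect_authenticate reconfigures the SPDK initiator and attaches. A minimal sketch of what the target-side echoes at host/auth.sh@48-51 plausibly amount to, assuming the kernel nvmet configfs layout (the host directory name is taken from the -q argument in the trace, the attribute paths are an assumption, and the full DHHC-1 secrets printed above are elided here):

  # Hypothetical target-side equivalent of nvmet_auth_set_key; paths assumed, secrets elided.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key='DHHC-1:...'       # keys[keyid] as printed in the trace
  ckey=''                # ckeys[keyid]; empty for keyid 4, as in the iteration above
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # FFDHE group under test
  echo "$key"         > "$host_dir/dhchap_key"       # host secret
  [ -n "$ckey" ] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # controller secret, only when bidirectional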
00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.184 00:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.445 nvme0n1 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.445 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
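The host-side half of each iteration, traced above and below, reduces to a short RPC sequence (rpc_cmd in the harness appears to wrap SPDK's scripts/rpc.py, which is an assumption here); the invocations below are copied from the trace for the sha384/ffdhe8192/keyid 0 case, with the key0/ckey0 keyring names assumed to have been registered earlier in auth.sh:

  rpc=scripts/rpc.py   # assumed path for the rpc_cmd wrapper used in the trace
  # 1. Limit the initiator to the digest/DH-group pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # 2. Attach to the target with the matching DH-HMAC-CHAP key (adding ckey0 requests bidirectional auth).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Confirm the authenticated controller exists, then detach before the next iteration.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  $rpc bdev_nvme_detach_controller nvme0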
00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.707 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.277 nvme0n1 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.277 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.537 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.108 nvme0n1 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:21.108 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.109 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.051 nvme0n1 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.051 00:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.991 nvme0n1 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.991 00:39:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.991 00:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.561 nvme0n1 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.561 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.821 nvme0n1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.821 00:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.821 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 nvme0n1 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 nvme0n1 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.095 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.364 00:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 nvme0n1 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.364 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.365 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.625 nvme0n1 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.625 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 nvme0n1 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 
00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.885 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.886 00:39:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.886 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 nvme0n1 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.146 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.407 nvme0n1 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.407 00:39:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.407 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.668 nvme0n1 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.668 
00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.668 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.929 nvme0n1 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.929 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.190 nvme0n1 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.190 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.451 00:39:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.451 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.712 nvme0n1 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.712 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.973 nvme0n1 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.973 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.234 nvme0n1 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.234 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.495 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.756 nvme0n1 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.756 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
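The get_main_ns_ip trace interleaved above (local ip, the ip_candidates associative array, the [[ -z ... ]] checks and the final echo 10.0.0.1) reduces to a small address-selection helper: map the transport under test to the name of the matching NVMF_* variable and print that variable's value. Below is a minimal standalone sketch of that logic, reconstructed only from the xtrace lines in this log; the TEST_TRANSPORT variable name, the default assignments and the early-return control flow are assumptions for illustration, not copied from nvmf/common.sh.

# Assumed environment for the sketch; the real run exports these elsewhere.
: "${TEST_TRANSPORT:=tcp}"
: "${NVMF_INITIATOR_IP:=10.0.0.1}"

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs resolve the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs (this log) resolve the initiator IP

    [[ -z $TEST_TRANSPORT ]] && return 1                      # no transport selected
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # unknown transport
    ip=${ip_candidates[$TEST_TRANSPORT]}                      # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                               # the variable must hold an address
    echo "${!ip}"                                             # prints 10.0.0.1 in this run
}

get_main_ns_ip   # -> 10.0.0.1, the -a address passed to every attach in this trace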
00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.757 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.017 nvme0n1 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.017 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
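Taken together, the entries above and below repeat one fixed pattern per (dhgroup, keyid) combination: program the key on the target, point the host at the same digest and DH group, attach with the per-key secrets, confirm the controller shows up as nvme0, then detach before the next combination. The following is a condensed sketch of that sweep for the sha512 portion visible in this log; rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are the suite helpers and data seen in the trace and are assumed to be in scope, and the explicit loop nesting is an inference from the repeated pattern rather than a copy of host/auth.sh.

# Sweep every DH group / key id pair with sha512, as traced above.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        # Target side: install key/ckey for this keyid (host/auth.sh@42-51 above).
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Host side: restrict DH-HMAC-CHAP to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # Controller key is optional; ckeys[] is empty for some ids (e.g. keyid 4 above).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # The authenticated controller must be listed as nvme0 before moving on.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done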
00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:28.278 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.279 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.540 nvme0n1 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.540 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.801 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.801 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.802 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.063 nvme0n1 00:28:29.063 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.063 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.063 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.063 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.064 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.064 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.325 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.586 nvme0n1 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.586 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.846 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.106 nvme0n1 00:28:30.106 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.106 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.106 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.106 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.106 00:39:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkMzAyODUyZmZiMjUzM2NkMGFiNjQ0ZDMzYTU4YzR6qCfH: 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWJmMzZhZWRiMjg5MjViNzRiNTFlZmY1OGM0YTE0MmI4ZDk0NDFkYmFiNzI1ZTNjMDhjMzhkMTBlNDUxYWZhM1nnD2o=: 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.366 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.367 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.937 nvme0n1 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.937 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.198 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.198 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.199 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.771 nvme0n1 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.771 00:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.771 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Njk0ZWJmZmViYmM1NjNjYTI1ZDNkYmU5Y2VjYjVhOGaT0cL/: 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyMzhhZmJkMzZjODZmM2ZjYmYyNDlhMzY3NTAwZTTA55Om: 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.032 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.605 nvme0n1 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODMzMzc4YWI2NGQwMTcwN2E4N2ZiYTE2NGZkZWNhZTljMGU0OWEwY2IxNzAwNTgy3d9GCQ==: 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: ]] 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwOGZiZTRiMzE3ODRhNmQ1YjQ0ODNjMGNkYzMyMjW5xbEG: 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:32.605 00:39:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.605 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.867 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.439 nvme0n1 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E1NjU0NmY2MDk4Y2RjOGMxNWJjMDFkNDZlNWRjZWMxMjZjODE2NWJhOTRmMDkyYmM1NTI0NjUzMDAzNjBmY4IWvpE=: 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:33.439 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 nvme0n1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE5MGY4YzEzMmMxZmExOTRmMWM3NmE0OTJmMjExNGIwMTY4Yzc0OWIzY2Q4YTZhCZVgvw==: 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBlZjFjYjM1OTNhOTQ3YzRjNDU2YzYxNjk4ZDI1ZWZlN2FjYTU4MDllNmVkNzg2z1bl8Q==: 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.384 
00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 request: 00:28:34.384 { 00:28:34.384 "name": "nvme0", 00:28:34.384 "trtype": "tcp", 00:28:34.384 "traddr": "10.0.0.1", 00:28:34.384 "adrfam": "ipv4", 00:28:34.384 "trsvcid": "4420", 00:28:34.384 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:34.384 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:34.384 "prchk_reftag": false, 00:28:34.384 "prchk_guard": false, 00:28:34.384 "hdgst": false, 00:28:34.384 "ddgst": false, 00:28:34.384 "method": "bdev_nvme_attach_controller", 00:28:34.384 "req_id": 1 00:28:34.384 } 00:28:34.384 Got JSON-RPC error response 00:28:34.384 response: 00:28:34.384 { 00:28:34.384 "code": -5, 00:28:34.384 "message": "Input/output error" 00:28:34.384 } 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 request: 00:28:34.384 { 00:28:34.384 "name": "nvme0", 00:28:34.384 "trtype": "tcp", 00:28:34.384 "traddr": "10.0.0.1", 00:28:34.384 "adrfam": "ipv4", 00:28:34.384 "trsvcid": "4420", 00:28:34.384 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:34.384 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:34.384 "prchk_reftag": false, 00:28:34.384 "prchk_guard": false, 00:28:34.384 "hdgst": false, 00:28:34.384 "ddgst": false, 00:28:34.384 "dhchap_key": "key2", 00:28:34.384 "method": "bdev_nvme_attach_controller", 00:28:34.384 "req_id": 1 00:28:34.384 } 00:28:34.384 Got JSON-RPC error response 00:28:34.384 response: 00:28:34.384 { 00:28:34.384 "code": -5, 00:28:34.384 "message": "Input/output error" 00:28:34.384 } 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:34.384 00:39:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:34.384 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.385 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.646 request: 00:28:34.646 { 00:28:34.646 "name": "nvme0", 00:28:34.646 "trtype": "tcp", 00:28:34.646 "traddr": "10.0.0.1", 00:28:34.646 "adrfam": "ipv4", 
00:28:34.646 "trsvcid": "4420", 00:28:34.646 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:34.646 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:34.646 "prchk_reftag": false, 00:28:34.646 "prchk_guard": false, 00:28:34.646 "hdgst": false, 00:28:34.646 "ddgst": false, 00:28:34.646 "dhchap_key": "key1", 00:28:34.646 "dhchap_ctrlr_key": "ckey2", 00:28:34.646 "method": "bdev_nvme_attach_controller", 00:28:34.646 "req_id": 1 00:28:34.646 } 00:28:34.646 Got JSON-RPC error response 00:28:34.646 response: 00:28:34.646 { 00:28:34.646 "code": -5, 00:28:34.646 "message": "Input/output error" 00:28:34.646 } 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.646 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.646 rmmod nvme_tcp 00:28:34.646 rmmod nvme_fabrics 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1255519 ']' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1255519 ']' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255519' 00:28:34.647 killing process with pid 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1255519 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.647 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:37.192 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:41.401 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:41.401 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:41.401 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mcm /tmp/spdk.key-null.AcR /tmp/spdk.key-sha256.FwU /tmp/spdk.key-sha384.DvE /tmp/spdk.key-sha512.rEJ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:41.401 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:44.699 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:44.699 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:44.699 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:44.959 00:28:44.959 real 0m58.741s 00:28:44.959 user 0m51.698s 00:28:44.959 sys 0m16.168s 00:28:44.959 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.959 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.959 ************************************ 00:28:44.959 END TEST nvmf_auth_host 00:28:44.959 ************************************ 00:28:44.959 00:39:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:44.959 00:39:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:44.959 00:39:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:44.959 00:39:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:44.959 00:39:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.960 00:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:44.960 ************************************ 00:28:44.960 START TEST nvmf_digest 00:28:44.960 ************************************ 00:28:44.960 00:39:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:45.220 * Looking for test storage... 
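The nvmf_auth_host run that ends above closes with three expected-failure attach attempts (no DH-HMAC-CHAP key, the wrong key, and a mismatched controller key), each rejected with the JSON-RPC code -5 "Input/output error" shown in the trace. A minimal sketch of one such negative check, using only the RPCs and flags visible in this log, with the workspace path shortened and assuming the DHHC-1 keys were registered earlier in the test (that part is not in this excerpt):

    # Sketch of the host/auth.sh@117-style negative check seen above: attaching with a
    # key the target does not expect must fail; all flags are taken from the trace.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected success: attach with a mismatched DH-HMAC-CHAP key" >&2
        exit 1
    fi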
00:28:45.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.220 00:39:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.221 00:39:58 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:45.221 00:39:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:53.359 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.359 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:53.360 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:53.360 Found net devices under 0000:31:00.0: cvl_0_0 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:53.360 Found net devices under 0000:31:00.1: cvl_0_1 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:53.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:28:53.360 00:28:53.360 --- 10.0.0.2 ping statistics --- 00:28:53.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.360 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:28:53.360 00:28:53.360 --- 10.0.0.1 ping statistics --- 00:28:53.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.360 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:53.360 00:40:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.621 ************************************ 00:28:53.621 START TEST nvmf_digest_clean 00:28:53.621 ************************************ 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1273284 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1273284 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1273284 ']' 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.621 
00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:53.621 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.622 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:53.622 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.622 [2024-07-16 00:40:07.118318] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:53.622 [2024-07-16 00:40:07.118378] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.622 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.622 [2024-07-16 00:40:07.195638] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.882 [2024-07-16 00:40:07.259453] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.882 [2024-07-16 00:40:07.259491] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.882 [2024-07-16 00:40:07.259499] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.882 [2024-07-16 00:40:07.259505] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.882 [2024-07-16 00:40:07.259511] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
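The target side of the digest test is now up: nvmf_tcp_init moved the cvl_0_0 port into its own network namespace, addressed both ends of the 10.0.0.0/24 link, confirmed reachability with the two pings above, and nvmfappstart launched nvmf_tgt inside that namespace with --wait-for-rpc. Condensed into a sketch, with interface names, addresses and flags exactly as printed in the trace and only the workspace path shortened:

    # Namespace topology and target launch, condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &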
00:28:53.882 [2024-07-16 00:40:07.259531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.882 null0 00:28:53.882 [2024-07-16 00:40:07.404826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.882 [2024-07-16 00:40:07.428997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:53.882 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1273447 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1273447 /var/tmp/bperf.sock 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1273447 ']' 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:53.883 00:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.883 [2024-07-16 00:40:07.484594] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:53.883 [2024-07-16 00:40:07.484639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273447 ] 00:28:53.883 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.144 [2024-07-16 00:40:07.568734] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.144 [2024-07-16 00:40:07.632787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.715 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.715 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:54.715 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.715 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.715 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.975 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.975 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.235 nvme0n1 00:28:55.235 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.235 00:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.235 Running I/O for 2 seconds... 
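Each run_bperf invocation above follows the same cycle: launch a dedicated bdevperf paused with --wait-for-rpc, bring it up over /var/tmp/bperf.sock, attach the listener at 10.0.0.2:4420 with data digest enabled (--ddgst), then drive the 2-second workload through bdevperf.py. A sketch of the first cycle (randread, 4096-byte I/O, queue depth 128), with workspace paths shortened; every flag is the one printed in the trace:

    # One bperf cycle, condensed from host/digest.sh@82-92 above.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests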
00:28:57.780 00:28:57.780 Latency(us) 00:28:57.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.780 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:57.780 nvme0n1 : 2.04 20199.58 78.90 0.00 0.00 6239.48 2880.85 46530.56 00:28:57.780 =================================================================================================================== 00:28:57.780 Total : 20199.58 78.90 0.00 0.00 6239.48 2880.85 46530.56 00:28:57.780 0 00:28:57.780 00:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.780 00:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.780 00:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.780 00:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:57.780 | select(.opcode=="crc32c") 00:28:57.780 | "\(.module_name) \(.executed)"' 00:28:57.780 00:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1273447 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1273447 ']' 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1273447 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1273447 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1273447' 00:28:57.780 killing process with pid 1273447 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1273447 00:28:57.780 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.780 00:28:57.780 Latency(us) 00:28:57.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.780 =================================================================================================================== 00:28:57.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1273447 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:57.780 00:40:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:57.780 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1274243 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1274243 /var/tmp/bperf.sock 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1274243 ']' 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.781 00:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.781 [2024-07-16 00:40:11.291568] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:57.781 [2024-07-16 00:40:11.291627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274243 ] 00:28:57.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:57.781 Zero copy mechanism will not be used. 
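After each 2-second run, the harness reads back accel-framework statistics and asserts that crc32c digest work was actually executed, and that it went through the software module (the accel_get_stats / jq step visible after the first run above). A sketch of that verification, using only the RPC and jq filter printed in the trace and the shortened rpc.py path:

    # crc32c verification, condensed from host/digest.sh@93-96 above; /var/tmp/bperf.sock
    # is the per-run bdevperf RPC socket.
    read -r acc_module acc_executed < <(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))                    # some crc32c operations must have run
    [[ $acc_module == software ]]             # and they were handled by the software engine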
00:28:57.781 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.781 [2024-07-16 00:40:11.369964] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.041 [2024-07-16 00:40:11.423610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.613 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.613 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:58.613 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.613 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.613 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.873 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.873 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.134 nvme0n1 00:28:59.134 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:59.134 00:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.134 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.134 Zero copy mechanism will not be used. 00:28:59.134 Running I/O for 2 seconds... 
00:29:01.683 00:29:01.683 Latency(us) 00:29:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.683 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:01.683 nvme0n1 : 2.00 2883.21 360.40 0.00 0.00 5546.50 1966.08 11632.64 00:29:01.683 =================================================================================================================== 00:29:01.683 Total : 2883.21 360.40 0.00 0.00 5546.50 1966.08 11632.64 00:29:01.683 0 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.683 | select(.opcode=="crc32c") 00:29:01.683 | "\(.module_name) \(.executed)"' 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1274243 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1274243 ']' 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1274243 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1274243 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1274243' 00:29:01.683 killing process with pid 1274243 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1274243 00:29:01.683 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.683 00:29:01.683 Latency(us) 00:29:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.683 =================================================================================================================== 00:29:01.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.683 00:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1274243 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:01.683 00:40:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1274982 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1274982 /var/tmp/bperf.sock 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1274982 ']' 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.683 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.683 [2024-07-16 00:40:15.097170] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
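After each run the harness checks which accel module actually executed the CRC32C digests. A sketch of that check, reusing the exact jq filter shown in the trace above (the expected module is "software" here because scan_dsa=false):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  read -r acc_module acc_executed < <(
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # Pass if at least one crc32c operation ran and it was handled by the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && \
      echo "crc32c handled by $acc_module ($acc_executed ops)"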
00:29:01.683 [2024-07-16 00:40:15.097235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274982 ] 00:29:01.683 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.683 [2024-07-16 00:40:15.176494] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.683 [2024-07-16 00:40:15.229958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.253 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.253 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:02.253 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:02.253 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:02.253 00:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:02.515 00:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.515 00:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.086 nvme0n1 00:29:03.086 00:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:03.086 00:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.086 Running I/O for 2 seconds... 
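For readability, the bdevperf invocation captured above for this pass, with the flags spelled out (a reading of the options, not authoritative documentation):

  # -m 2                     core mask 0x2, i.e. run the reactor on core 1 (matches the notice above)
  # -r /var/tmp/bperf.sock   RPC socket used by the bperf_rpc / bperf_py helpers
  # -w randwrite -o 4096     workload type and I/O size in bytes
  # -t 2 -q 128              run time in seconds and queue depth
  # -z                       start idle and wait for the perform_tests RPC
  # --wait-for-rpc           defer subsystem init until framework_start_init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc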
00:29:05.004 00:29:05.004 Latency(us) 00:29:05.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.004 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.004 nvme0n1 : 2.00 21855.52 85.37 0.00 0.00 5848.74 2211.84 13271.04 00:29:05.004 =================================================================================================================== 00:29:05.004 Total : 21855.52 85.37 0.00 0.00 5848.74 2211.84 13271.04 00:29:05.004 0 00:29:05.004 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:05.004 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:05.004 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:05.004 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:05.004 | select(.opcode=="crc32c") 00:29:05.004 | "\(.module_name) \(.executed)"' 00:29:05.004 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1274982 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1274982 ']' 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1274982 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1274982 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1274982' 00:29:05.300 killing process with pid 1274982 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1274982 00:29:05.300 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.300 00:29:05.300 Latency(us) 00:29:05.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.300 =================================================================================================================== 00:29:05.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1274982 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:05.300 00:40:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1275672 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1275672 /var/tmp/bperf.sock 00:29:05.300 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1275672 ']' 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.301 00:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:05.301 [2024-07-16 00:40:18.918319] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:05.301 [2024-07-16 00:40:18.918375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275672 ] 00:29:05.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.301 Zero copy mechanism will not be used. 
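Between passes the harness tears down the previous bdevperf instance before starting the next one on the same socket. A simplified sketch of that killprocess/wait pattern (the real helper in autotest_common.sh also handles sudo-owned processes; this version assumes bdevperf was launched with & in the same shell so wait can reap it):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                            # make sure the pid is still alive
      echo "killing process with pid $pid (comm: $(ps --no-headers -o comm= "$pid"))"
      kill "$pid"
      wait "$pid" 2>/dev/null                   # reap it so /var/tmp/bperf.sock is free for the next run
  }
  killprocess "$bperfpid"                       # $bperfpid recorded when bdevperf was started in the background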
00:29:05.562 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.562 [2024-07-16 00:40:18.998875] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.562 [2024-07-16 00:40:19.050735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.132 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.132 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:06.132 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:06.132 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:06.132 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:06.392 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.392 00:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.651 nvme0n1 00:29:06.651 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:06.651 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:06.651 Zero copy mechanism will not be used. 00:29:06.651 Running I/O for 2 seconds... 
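The IOPS and MiB/s columns in the result tables above are consistent with each other (MiB/s = IOPS * I/O size / 2^20); a quick check against the 131072-byte randread pass:

  # 2883.21 IOPS * 131072 B / 1048576 B/MiB = 360.40 MiB/s, matching the reported column.
  awk 'BEGIN { printf "%.2f MiB/s\n", 2883.21 * 131072 / 1048576 }'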
00:29:09.191 00:29:09.191 Latency(us) 00:29:09.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.191 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:09.191 nvme0n1 : 2.01 3281.47 410.18 0.00 0.00 4866.98 2088.96 13817.17 00:29:09.191 =================================================================================================================== 00:29:09.191 Total : 3281.47 410.18 0.00 0.00 4866.98 2088.96 13817.17 00:29:09.191 0 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:09.191 | select(.opcode=="crc32c") 00:29:09.191 | "\(.module_name) \(.executed)"' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1275672 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1275672 ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1275672 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1275672 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1275672' 00:29:09.191 killing process with pid 1275672 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1275672 00:29:09.191 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.191 00:29:09.191 Latency(us) 00:29:09.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.191 =================================================================================================================== 00:29:09.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1275672 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1273284 00:29:09.191 00:40:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1273284 ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1273284 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1273284 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1273284' 00:29:09.191 killing process with pid 1273284 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1273284 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1273284 00:29:09.191 00:29:09.191 real 0m15.696s 00:29:09.191 user 0m31.208s 00:29:09.191 sys 0m3.334s 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:09.191 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.191 ************************************ 00:29:09.191 END TEST nvmf_digest_clean 00:29:09.192 ************************************ 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:09.192 ************************************ 00:29:09.192 START TEST nvmf_digest_error 00:29:09.192 ************************************ 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:09.192 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1276381 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1276381 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1276381 ']' 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.452 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.452 [2024-07-16 00:40:22.881184] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:09.452 [2024-07-16 00:40:22.881241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.452 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.452 [2024-07-16 00:40:22.955938] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.452 [2024-07-16 00:40:23.026049] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.452 [2024-07-16 00:40:23.026089] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.452 [2024-07-16 00:40:23.026096] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.452 [2024-07-16 00:40:23.026107] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.452 [2024-07-16 00:40:23.026113] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
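The nvmf_digest_error test that starts here reuses the same bdevperf workload, but first re-routes the crc32c opcode on the target to the accel "error" module and arms it to corrupt a batch of operations, so each affected read completes on the initiator with a data digest error. Condensed from the calls visible in the trace below, using the harness helpers as they appear in the xtrace (rpc_cmd talks to the nvmf_tgt started above, which is why it was launched with --wait-for-rpc; bperf_rpc and bperf_py talk to bdevperf on /var/tmp/bperf.sock):

  rpc_cmd accel_assign_opc -o crc32c -m error             # target: route crc32c to the "error" accel module
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable   # start with injection disabled
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c operations
  bperf_py perform_tests
  # Each corrupted digest surfaces below as a host-side "data digest error" followed by a
  # "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion, which the nvme bdev layer keeps
  # retrying because the retry count was set to -1.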
00:29:09.452 [2024-07-16 00:40:23.026137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.023 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.023 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:10.023 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.023 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.023 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.284 [2024-07-16 00:40:23.692048] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.284 null0 00:29:10.284 [2024-07-16 00:40:23.772738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.284 [2024-07-16 00:40:23.796933] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1276723 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1276723 /var/tmp/bperf.sock 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1276723 ']' 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.284 00:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.285 [2024-07-16 00:40:23.852802] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:10.285 [2024-07-16 00:40:23.852854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276723 ] 00:29:10.285 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.546 [2024-07-16 00:40:23.931415] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.546 [2024-07-16 00:40:23.984901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.114 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.114 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:11.114 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.114 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.374 00:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.635 nvme0n1 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:11.635 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.635 Running I/O for 2 seconds... 00:29:11.635 [2024-07-16 00:40:25.263392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.635 [2024-07-16 00:40:25.263423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.635 [2024-07-16 00:40:25.263432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.277418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.277438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.277445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.290045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.290064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.290071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.302298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.302317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.302323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.315454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.315472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.315479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.327734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.327751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.327757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.340513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.340529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23123 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.340535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.352880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.352898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.352904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.365937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.365955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.365961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.377551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.377569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.377575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.389254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.389271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.389277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.402828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.402845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.402855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.413654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.413672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.413678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.425837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.425854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.425860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.438076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.438093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.438099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.451179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.451196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.451202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.462841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.462858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.462864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.476521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.476538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.476544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.487889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.487906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.487912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.500499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.500516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.500522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.512280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 
00:40:25.512300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.512306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.897 [2024-07-16 00:40:25.525287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:11.897 [2024-07-16 00:40:25.525304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.897 [2024-07-16 00:40:25.525310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.159 [2024-07-16 00:40:25.537674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.159 [2024-07-16 00:40:25.537692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.159 [2024-07-16 00:40:25.537699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.159 [2024-07-16 00:40:25.549841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.159 [2024-07-16 00:40:25.549858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.549864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.561889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.561906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.561913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.572361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.572377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.587529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.587546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.587552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.597992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.598009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.598016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.610652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.610669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.610675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.623009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.623026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.623032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.635650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.635667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.635673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.648215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.648235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.648241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.660749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.660766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.660772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.671154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.671170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.671176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.683769] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.683786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.683792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.696065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.696081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.696087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.708859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.708875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.708881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.720898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.720918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.720924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.733426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.733443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.733449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.746001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.746017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.746024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.757899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.757916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.757922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.768841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.768857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.768863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.160 [2024-07-16 00:40:25.781457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.160 [2024-07-16 00:40:25.781474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.160 [2024-07-16 00:40:25.781480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.422 [2024-07-16 00:40:25.794313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.422 [2024-07-16 00:40:25.794330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.422 [2024-07-16 00:40:25.794336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.422 [2024-07-16 00:40:25.806921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.422 [2024-07-16 00:40:25.806937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.422 [2024-07-16 00:40:25.806943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.422 [2024-07-16 00:40:25.818501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.422 [2024-07-16 00:40:25.818518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.422 [2024-07-16 00:40:25.818524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.831126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.831143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.831149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.842741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.842758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.842764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.856615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.856632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.856638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.867794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.867810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.867817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.880060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.880077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.880083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.892965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.892989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.905731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.905748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.905754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.917015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.917031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.917038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.930719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.930735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.930747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.940977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.940994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.941000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.953838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.953861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.965444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.965460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.965466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.978968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.978985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.978991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:25.990513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:25.990530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:25.990536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:26.003584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:26.003600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:26.003606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:26.014207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:26.014223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:12.423 [2024-07-16 00:40:26.014232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:26.026435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:26.026452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:26.026458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:26.038891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:26.038910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:26.038916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.423 [2024-07-16 00:40:26.051399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.423 [2024-07-16 00:40:26.051415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.423 [2024-07-16 00:40:26.051421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.064235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.064253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.064259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.077770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.077794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.087446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.087469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.087476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.101027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.101044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:25486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.101050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.114698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.114715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.114722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.126746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.126763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.126769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.139821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.139838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.139844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.152339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.152355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.152361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.164784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.164801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.164807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.176141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.176157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.176163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.188791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.188807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.188814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.200753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.200770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.200776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.212722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.212739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.212745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.226033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.226049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.226055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.238876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.238893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.238899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.249565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.684 [2024-07-16 00:40:26.249581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.684 [2024-07-16 00:40:26.249590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.684 [2024-07-16 00:40:26.262271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.685 [2024-07-16 00:40:26.262296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.685 [2024-07-16 00:40:26.262302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.685 [2024-07-16 00:40:26.274048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.685 
[2024-07-16 00:40:26.274065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.685 [2024-07-16 00:40:26.274071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.685 [2024-07-16 00:40:26.287870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.685 [2024-07-16 00:40:26.287888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.685 [2024-07-16 00:40:26.287894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.685 [2024-07-16 00:40:26.298624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.685 [2024-07-16 00:40:26.298641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.685 [2024-07-16 00:40:26.298647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.685 [2024-07-16 00:40:26.312244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.685 [2024-07-16 00:40:26.312260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.685 [2024-07-16 00:40:26.312266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.324221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.324241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.324247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.336409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.336425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.336431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.348459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.348476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.348482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.361339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.361355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.361362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.371702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.371718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.371724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.384978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.384994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.385000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.396904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.396920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.396926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.410174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.410190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.410196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.422124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.422140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.422146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.433998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.434015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.434022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.446400] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.446416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.446422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.458669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.458685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.458694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.470667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.470684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.470690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.482786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.482803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.482809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.493641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.493657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.493663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.507822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.507837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.507844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.519172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.519189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.519195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.531565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.531582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.531588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.543977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.543994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.544001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.555811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.946 [2024-07-16 00:40:26.555828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.946 [2024-07-16 00:40:26.555834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.946 [2024-07-16 00:40:26.568242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:12.947 [2024-07-16 00:40:26.568262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.947 [2024-07-16 00:40:26.568268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.580924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.580942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.580950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.594017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.594035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.594041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.606129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.606145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.606152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.618359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.618375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.618382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.631166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.631183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.631190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.643655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.643671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.643678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.654171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.654193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.667916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.667933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.667939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.681077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.681094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.681100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.693524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.693541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.693547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.705746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.705763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.705769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.717157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.717174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.717180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.729859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.729877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.729883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.742209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.742225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.742235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.753481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.753498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.753504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.767025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.767042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.767048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.779245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.779261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:13.208 [2024-07-16 00:40:26.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.791934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.791951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.791957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.802398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.802415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.802421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.815112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.815135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.208 [2024-07-16 00:40:26.827188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.208 [2024-07-16 00:40:26.827205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.208 [2024-07-16 00:40:26.827211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.839589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.839606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.839613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.852632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.852649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.852654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.864374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.864391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:7553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.864397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.876159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.876175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.876182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.888180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.888197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.888203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.901569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.901592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.912998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.913014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.913020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.926960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.926977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.926983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.937344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.937361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.937367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.950546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.950563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.950569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.960829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.960846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.960852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.973866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.973882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.973889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.986839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.986856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.986866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:26.999708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:26.999725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:26.999731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:27.011916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:27.011934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:27.011940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:27.024028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:27.024044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:27.024050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:27.036339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 
00:29:13.470 [2024-07-16 00:40:27.036356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.470 [2024-07-16 00:40:27.036362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.470 [2024-07-16 00:40:27.048289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.470 [2024-07-16 00:40:27.048306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.471 [2024-07-16 00:40:27.048312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.471 [2024-07-16 00:40:27.060958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.471 [2024-07-16 00:40:27.060975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.471 [2024-07-16 00:40:27.060981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.471 [2024-07-16 00:40:27.073453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.471 [2024-07-16 00:40:27.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.471 [2024-07-16 00:40:27.073475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.471 [2024-07-16 00:40:27.085830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.471 [2024-07-16 00:40:27.085847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.471 [2024-07-16 00:40:27.085853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.471 [2024-07-16 00:40:27.098602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.471 [2024-07-16 00:40:27.098621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.471 [2024-07-16 00:40:27.098627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.110912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.110929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.110936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.123753] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.123770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.133434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.133450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.146891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.146908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.146914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.159331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.159348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.159354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.171367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.171383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.171389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.184959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.184975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.184981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.197267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.197284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.197290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.208042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.208058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.208064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.220763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.220779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.220786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.233168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.233185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.233191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 [2024-07-16 00:40:27.244921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21659f0) 00:29:13.731 [2024-07-16 00:40:27.244937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.731 [2024-07-16 00:40:27.244943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.731 00:29:13.731 Latency(us) 00:29:13.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.731 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:13.731 nvme0n1 : 2.00 20678.75 80.78 0.00 0.00 6182.04 2075.31 16056.32 00:29:13.731 =================================================================================================================== 00:29:13.731 Total : 20678.75 80.78 0.00 0.00 6182.04 2075.31 16056.32 00:29:13.731 0 00:29:13.731 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:13.731 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:13.731 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:13.731 | .driver_specific 00:29:13.731 | .nvme_error 00:29:13.731 | .status_code 00:29:13.731 | .command_transient_transport_error' 00:29:13.731 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1276723 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 1276723 ']' 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1276723 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1276723 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1276723' 00:29:13.991 killing process with pid 1276723 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1276723 00:29:13.991 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.991 00:29:13.991 Latency(us) 00:29:13.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.991 =================================================================================================================== 00:29:13.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1276723 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1277429 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1277429 /var/tmp/bperf.sock 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1277429 ']' 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.991 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.252 [2024-07-16 00:40:27.649003] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
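The xtrace above shows the harness starting a dedicated bdevperf process on its own RPC socket (-r /var/tmp/bperf.sock) with -z so it idles until told to run, and then waiting for that socket before issuing any RPCs. A minimal sketch of that pattern, assuming SPDK_ROOT points at the SPDK checkout; the polling loop below is illustrative and stands in for the harness's own waitforlisten helper:

  # Start bdevperf idle (-z) on a private RPC socket; core mask, workload,
  # I/O size, runtime and queue depth mirror the traced invocation.
  "$SPDK_ROOT"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Block until the RPC socket answers before configuring the run
  # (stand-in for waitforlisten; rpc_get_methods is a cheap no-op query).
  until "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done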
00:29:14.252 [2024-07-16 00:40:27.649061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277429 ] 00:29:14.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.252 Zero copy mechanism will not be used. 00:29:14.252 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.252 [2024-07-16 00:40:27.730524] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.252 [2024-07-16 00:40:27.784011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.821 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.821 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:14.821 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:14.821 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.083 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.343 nvme0n1 00:29:15.343 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:15.344 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.344 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.344 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.344 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:15.344 00:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.604 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.604 Zero copy mechanism will not be used. 00:29:15.604 Running I/O for 2 seconds... 
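Taken together, the RPCs traced above amount to a small recipe for reproducing this digest-error run: enable per-controller NVMe error counters and unlimited retries on the bdevperf side, attach the controller with TCP data digest enabled (--ddgst), arm crc32c corruption in the accel layer, drive the workload, and finally read back the transient-transport-error counter that the (( count > 0 )) check earlier in the log asserts on. A minimal sketch under two stated assumptions: RPC is shorthand for the rpc.py path used in the trace, and the accel_error_inject_error calls go to the default RPC socket, since the trace issues them through rpc_cmd without showing which application that helper addresses:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Track transient transport errors per controller and retry indefinitely.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any stale crc32c fault, then attach with TCP data digest enabled.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 32 crc32c results so received data fails its digest check.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then read the counter the test compares against zero.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $BPERF_SOCK perform_tests
  $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'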
00:29:15.604 [2024-07-16 00:40:29.022949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.022981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.022989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.032660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.032687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.042968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.042987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.042994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.053248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.053267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.053273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.065160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.065178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.065184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.076523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.076552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.087978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.087995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.088002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.099192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.099209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.099216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.109130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.109148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.109154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.120238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.120255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.120262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.130656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.130674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.130680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.141604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.141622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.141629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.153774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.153798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.167173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.167191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.167197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.180752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.180769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.180776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.194202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.194219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.194226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.207919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.207936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.604 [2024-07-16 00:40:29.207942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.604 [2024-07-16 00:40:29.222177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.604 [2024-07-16 00:40:29.222194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.605 [2024-07-16 00:40:29.222201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.235900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.235918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.235924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.248651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.248667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.248673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.263066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.263083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.865 [2024-07-16 00:40:29.263090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.276298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.276315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.276321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.290617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.290635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.290644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.303304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.303322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.303328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.316863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.316880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.316886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.327946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.865 [2024-07-16 00:40:29.327963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.865 [2024-07-16 00:40:29.327969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.865 [2024-07-16 00:40:29.338348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.338365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.338371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.348428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.348445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.348452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.360854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.360871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.360877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.370490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.370507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.370513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.380626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.380643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.380649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.392131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.392149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.392155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.404141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.404158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.404164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.413429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.413447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.413454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.422227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.422248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.422254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.433444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.433461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.443628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.443645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.443651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.455416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.455433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.455439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.467969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.467987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.467993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.476425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.476441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.476450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.866 [2024-07-16 00:40:29.486442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:15.866 [2024-07-16 00:40:29.486459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.866 [2024-07-16 00:40:29.486465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.498221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 
00:29:16.127 [2024-07-16 00:40:29.498242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.498249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.508296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.508313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.508319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.517208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.517226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.517238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.526856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.526873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.526879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.538282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.538299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.538305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.549549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.549566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.549572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.561593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.561610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.561616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.571431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.571452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.571458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.581482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.581499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.581505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.593740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.593757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.593763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.603929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.603946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.603952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.613657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.613674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.613680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.623565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.623582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.623588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.635711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.635728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.635734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.645892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.645909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.127 [2024-07-16 00:40:29.645915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.127 [2024-07-16 00:40:29.656268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.127 [2024-07-16 00:40:29.656286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.656292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.668750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.668767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.679581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.679599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.679605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.689862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.689881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.689888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.702525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.702543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.702549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.712735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.712752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.712758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.723072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.723089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.734134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.734152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.745859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.745876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.745883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.128 [2024-07-16 00:40:29.757278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.128 [2024-07-16 00:40:29.757295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.128 [2024-07-16 00:40:29.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.767968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.767985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.767991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.779013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.779031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.779037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.792541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.792558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.792564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.803078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.814497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.814515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.814521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.824254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.824272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.824278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.834854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.834871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.834877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.847114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.847131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.847138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.856723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.856740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.388 [2024-07-16 00:40:29.856746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.388 [2024-07-16 00:40:29.868628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.388 [2024-07-16 00:40:29.868645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.388 [2024-07-16 00:40:29.868651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.879526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.879543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.879549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.889355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.889372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.889378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.901358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.901375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.901382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.912057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.912074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.912080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.923469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.923487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.934867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.934885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.934891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.945855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.945872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.945882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.956722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.956740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.956747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.967998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.968016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.968023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.978808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.978825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.978831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:29.991209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:29.991227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:29.991239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:30.001869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:30.001886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:30.001893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.389 [2024-07-16 00:40:30.013875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.389 [2024-07-16 00:40:30.013893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.389 [2024-07-16 00:40:30.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.024868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.024886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.024893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.034444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.034462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.034469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.047038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.047059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.057796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.057813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.057819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.068507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.068525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.068531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.077549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.077567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.077573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.088651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.088669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.088676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.099553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 
00:29:16.650 [2024-07-16 00:40:30.099570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.099577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.110705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.110724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.110731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.121205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.121223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.121234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.132853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.132871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.132877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.145892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.145910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.145916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.158207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.158225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.158237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.171551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.171568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.171574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.184370] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.184387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.650 [2024-07-16 00:40:30.184394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.650 [2024-07-16 00:40:30.195593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.650 [2024-07-16 00:40:30.195610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.195616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.206708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.206726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.206732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.218301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.218319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.218325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.225294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.225313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.225320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.237431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.237452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.237458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.248624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.248642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.248648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.259973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.259991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.259997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.270507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.270525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.270531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.651 [2024-07-16 00:40:30.280348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.651 [2024-07-16 00:40:30.280365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.651 [2024-07-16 00:40:30.280371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.290706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.290723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.290729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.303631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.303649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.303655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.314530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.314548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.314554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.325151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.325169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.325176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.336833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.336851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.336857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.349018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.349035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.349042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.357804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.357823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.357830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.369788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.369807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.369813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.380345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.380364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.380371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.392749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.392768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.404323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.404342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.404348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.414963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.414982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.414990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.426163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.426182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.426192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.437115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.437133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.437140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.447987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.448006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.448012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.459115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.459133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.459140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.470206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.470225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.470236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.482464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.482482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 
[2024-07-16 00:40:30.482489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.494604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.912 [2024-07-16 00:40:30.494622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.912 [2024-07-16 00:40:30.494628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.912 [2024-07-16 00:40:30.505902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.913 [2024-07-16 00:40:30.505919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.913 [2024-07-16 00:40:30.505925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.913 [2024-07-16 00:40:30.516938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.913 [2024-07-16 00:40:30.516957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.913 [2024-07-16 00:40:30.516963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.913 [2024-07-16 00:40:30.529938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:16.913 [2024-07-16 00:40:30.529959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.913 [2024-07-16 00:40:30.529965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.543815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.543834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.543839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.558374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.558392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.558398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.571125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.571144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.571150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.583221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.583244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.583250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.597500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.597519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.597525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.610913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.610932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.610938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.622453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.622471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.622477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.633941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.633959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.633965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.644289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.644308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.644314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.655741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.655766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.665266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.665284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.665290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.675698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.675717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.675723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.687193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.687211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.687217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.698063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.698082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.698088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.709003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.709021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.709027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.720558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.720576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.720583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.731525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.731543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.731552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.742396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.742415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.742421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.752506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.752525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.752531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.763076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.763094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.763101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.774822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.774841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.774847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.786988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.787013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.174 [2024-07-16 00:40:30.798838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.174 [2024-07-16 00:40:30.798857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.174 [2024-07-16 00:40:30.798863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.812180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 
[2024-07-16 00:40:30.812199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.812205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.824296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 [2024-07-16 00:40:30.824314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.824320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.836026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 [2024-07-16 00:40:30.836047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.836053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.846045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 [2024-07-16 00:40:30.846063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.846069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.856829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 [2024-07-16 00:40:30.856847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.856853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.867451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.435 [2024-07-16 00:40:30.867470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.435 [2024-07-16 00:40:30.867476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.435 [2024-07-16 00:40:30.880356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.880374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.880380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.891358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.891376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.891382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.901762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.901781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.901787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.911876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.911894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.911901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.922457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.922475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.922484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.932260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.932277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.932283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.944585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.944609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.957938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.957957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.957963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.971868] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.971886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.971892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.985555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.985573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.985579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:30.997500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:30.997519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:30.997525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.436 [2024-07-16 00:40:31.008366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a7430) 00:29:17.436 [2024-07-16 00:40:31.008384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.436 [2024-07-16 00:40:31.008390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.436 00:29:17.436 Latency(us) 00:29:17.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.436 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.436 nvme0n1 : 2.00 2738.36 342.30 0.00 0.00 5839.49 1542.83 15182.51 00:29:17.436 =================================================================================================================== 00:29:17.436 Total : 2738.36 342.30 0.00 0.00 5839.49 1542.83 15182.51 00:29:17.436 0 00:29:17.436 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:17.436 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:17.436 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:17.436 | .driver_specific 00:29:17.436 | .nvme_error 00:29:17.436 | .status_code 00:29:17.436 | .command_transient_transport_error' 00:29:17.436 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 176 > 0 )) 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1277429 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1277429 ']' 00:29:17.696 00:40:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1277429 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277429 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277429' 00:29:17.696 killing process with pid 1277429 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1277429 00:29:17.696 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.696 00:29:17.696 Latency(us) 00:29:17.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.696 =================================================================================================================== 00:29:17.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.696 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1277429 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1278111 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1278111 /var/tmp/bperf.sock 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1278111 ']' 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.957 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.957 [2024-07-16 00:40:31.418852] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
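Note on the check that just ran above: the randread pass of host/digest.sh ends by verifying that the injected digest errors were really surfaced to the application. It does this by querying the bdev's NVMe error counters over the bperf RPC socket and asserting the transient-transport-error count is non-zero (176 in this run). Below is a minimal bash sketch of that check, assembled only from the commands visible in the trace above (socket path, bdev name and jq filter are taken verbatim from the trace; the actual helper body in host/digest.sh may differ in detail):

# count completions recorded as COMMAND TRANSIENT TRANSPORT ERROR for a bdev
# owned by the bdevperf app listening on /var/tmp/bperf.sock
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # this run reported 176, so the assertion passes and the old bperf pid is killed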
00:29:17.957 [2024-07-16 00:40:31.418909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278111 ] 00:29:17.957 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.957 [2024-07-16 00:40:31.498109] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.957 [2024-07-16 00:40:31.551600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.898 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.159 nvme0n1 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:19.159 00:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.159 Running I/O for 2 seconds... 
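For readability, the setup sequence that was just traced for this randwrite pass is restated below as plain commands. Everything (paths, flags, addresses, NQN) is copied from the trace above; the harness helpers bperf_rpc, rpc_cmd and waitforlisten are expanded to the underlying calls shown in the trace, so treat this as a condensed sketch rather than the literal host/digest.sh source:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# start a second bdevperf on its own RPC socket: core mask 0x2, 4096-byte random writes,
# queue depth 128, 2-second run, -z = wait for RPC before starting I/O
# (the harness backgrounds this and waits on /var/tmp/bperf.sock via waitforlisten)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# enable per-status NVMe error counters (what the transient-error check reads)
# and set the bdev retry count to -1
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# crc32c error injection is disabled while the controller is attached;
# rpc_cmd in the trace carries no -s flag, i.e. it does not go to the bperf socket
rpc_cmd accel_error_inject_error -o crc32c -t disable

# attach over TCP with data digest enabled
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# then corrupt 256 crc32c operations so data digests stop matching
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# drive I/O; each digest mismatch appears below as a data digest error followed by a
# COMMAND TRANSIENT TRANSPORT ERROR completion
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests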
00:29:19.159 [2024-07-16 00:40:32.730260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.159 [2024-07-16 00:40:32.731843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.731868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:19.159 [2024-07-16 00:40:32.741130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e0a68 00:29:19.159 [2024-07-16 00:40:32.742335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.742353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:19.159 [2024-07-16 00:40:32.753160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ea248 00:29:19.159 [2024-07-16 00:40:32.754318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.754338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.159 [2024-07-16 00:40:32.764853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f6cc8 00:29:19.159 [2024-07-16 00:40:32.765823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.765839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.159 [2024-07-16 00:40:32.776659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f2948 00:29:19.159 [2024-07-16 00:40:32.777607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.777622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.159 [2024-07-16 00:40:32.788450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.159 [2024-07-16 00:40:32.789416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.159 [2024-07-16 00:40:32.789431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.800387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f0788 00:29:19.421 [2024-07-16 00:40:32.801568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.801583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.812121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e3d08 00:29:19.421 [2024-07-16 00:40:32.813326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.813341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.823909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fc128 00:29:19.421 [2024-07-16 00:40:32.825070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.825086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.835545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e5ec8 00:29:19.421 [2024-07-16 00:40:32.836529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.836544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.847328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f6cc8 00:29:19.421 [2024-07-16 00:40:32.848301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.848317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.859090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e9168 00:29:19.421 [2024-07-16 00:40:32.860068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.860084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.870840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e5ec8 00:29:19.421 [2024-07-16 00:40:32.871809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.871824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.882762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190feb58 00:29:19.421 [2024-07-16 00:40:32.883968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.893742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ee5c8 00:29:19.421 [2024-07-16 00:40:32.894822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.894837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.906224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f7da8 00:29:19.421 [2024-07-16 00:40:32.907198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.907214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.918130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.421 [2024-07-16 00:40:32.919338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.919354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.928988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fb048 00:29:19.421 [2024-07-16 00:40:32.930051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.930066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.943029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190feb58 00:29:19.421 [2024-07-16 00:40:32.944742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.944757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.953434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f3a28 00:29:19.421 [2024-07-16 00:40:32.954633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.954648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.965164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e5ec8 00:29:19.421 [2024-07-16 00:40:32.966342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.966357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.976732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.421 [2024-07-16 00:40:32.977699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:32.988494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ec408 00:29:19.421 [2024-07-16 00:40:32.989443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:32.989459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:33.000221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e1b48 00:29:19.421 [2024-07-16 00:40:33.001190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:33.001205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:33.013526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.421 [2024-07-16 00:40:33.015249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:33.015264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:33.022981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e1b48 00:29:19.421 [2024-07-16 00:40:33.024057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:33.024071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:33.035484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ee5c8 00:29:19.421 [2024-07-16 00:40:33.036443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:33.036459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.421 [2024-07-16 00:40:33.047386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e9168 00:29:19.421 [2024-07-16 00:40:33.048587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.421 [2024-07-16 00:40:33.048604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.059128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f0788 00:29:19.682 [2024-07-16 00:40:33.060335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.060353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.070864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ed4e8 00:29:19.682 [2024-07-16 00:40:33.072054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.072070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.082628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e9168 00:29:19.682 [2024-07-16 00:40:33.083823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.083838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.094368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f0788 00:29:19.682 [2024-07-16 00:40:33.095552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.095566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.106102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190feb58 00:29:19.682 [2024-07-16 00:40:33.107068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.107084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.119382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.682 [2024-07-16 00:40:33.121088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.121103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.129731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fb048 00:29:19.682 [2024-07-16 00:40:33.130928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.130943] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.141693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f0788 00:29:19.682 [2024-07-16 00:40:33.142896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.142911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.153455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e9168 00:29:19.682 [2024-07-16 00:40:33.154657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.154672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.164308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190feb58 00:29:19.682 [2024-07-16 00:40:33.165389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.165407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.176957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f7da8 00:29:19.682 [2024-07-16 00:40:33.178168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.178183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.188707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fb048 00:29:19.682 [2024-07-16 00:40:33.189875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.189890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.200303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f1868 00:29:19.682 [2024-07-16 00:40:33.201271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.201286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.212046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f5be8 00:29:19.682 [2024-07-16 00:40:33.212978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.212994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.223796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e6fa8 00:29:19.682 [2024-07-16 00:40:33.224760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.224775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.235537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f1868 00:29:19.682 [2024-07-16 00:40:33.236466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.236481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.247289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f5be8 00:29:19.682 [2024-07-16 00:40:33.248252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.248267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.259150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e3d08 00:29:19.682 [2024-07-16 00:40:33.260339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.260355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.270919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ee5c8 00:29:19.682 [2024-07-16 00:40:33.272124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.682 [2024-07-16 00:40:33.272140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.682 [2024-07-16 00:40:33.282672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f6cc8 00:29:19.683 [2024-07-16 00:40:33.283879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.683 [2024-07-16 00:40:33.283894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.683 [2024-07-16 00:40:33.294415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e3d08 00:29:19.683 [2024-07-16 00:40:33.295618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.683 [2024-07-16 00:40:33.295633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.683 [2024-07-16 00:40:33.306169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ee5c8 00:29:19.683 [2024-07-16 00:40:33.307341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.683 [2024-07-16 00:40:33.307356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.317765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e0a68 00:29:19.943 [2024-07-16 00:40:33.318735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.318750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.329512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e6fa8 00:29:19.943 [2024-07-16 00:40:33.330481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.330497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.341297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fe2e8 00:29:19.943 [2024-07-16 00:40:33.342262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.342277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.353040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e0a68 00:29:19.943 [2024-07-16 00:40:33.354004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.354020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.364790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e6fa8 00:29:19.943 [2024-07-16 00:40:33.365758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.365773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.376699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190eb328 00:29:19.943 [2024-07-16 00:40:33.377902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 
00:40:33.377918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.387546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f3a28 00:29:19.943 [2024-07-16 00:40:33.388629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.388644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.400072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190fd208 00:29:19.943 [2024-07-16 00:40:33.401041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.401056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.413366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e5ec8 00:29:19.943 [2024-07-16 00:40:33.415086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.423727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190ef6a8 00:29:19.943 [2024-07-16 00:40:33.424890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.424905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.436878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e0a68 00:29:19.943 [2024-07-16 00:40:33.438604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.438620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.447213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190f7da8 00:29:19.943 [2024-07-16 00:40:33.448370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.448385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.459063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.460255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:19.943 [2024-07-16 00:40:33.460270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.470826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.472033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.472052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.482557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.483757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.483773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.494324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.495514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.495529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.506063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.507264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.507279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.517792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.518985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.519000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.529547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.530745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.541287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.542447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12683 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.542461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.553002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.554202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.554217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:19.943 [2024-07-16 00:40:33.564747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:19.943 [2024-07-16 00:40:33.565940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.943 [2024-07-16 00:40:33.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.576481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.577684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.577700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.588264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.589471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.589486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.600018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.601210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.601225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.611767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.612965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.612980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.623502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.624701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22960 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.624716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.635248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.636456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.636471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.646992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.648191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.648206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.658768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.659962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.659977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.670530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.671730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.671745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.682302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.683505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.683520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.694076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.695280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.695295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.705817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.707016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:5604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.707031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.717595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.718796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.718812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.729352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.730559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.741087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.742297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.752891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.754089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.754105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.764617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.204 [2024-07-16 00:40:33.765817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.204 [2024-07-16 00:40:33.765833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.204 [2024-07-16 00:40:33.776377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.205 [2024-07-16 00:40:33.777536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.205 [2024-07-16 00:40:33.777554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.205 [2024-07-16 00:40:33.788146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.205 [2024-07-16 00:40:33.789341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:22990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.205 [2024-07-16 00:40:33.789356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.205 [2024-07-16 00:40:33.799903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.205 [2024-07-16 00:40:33.801101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.205 [2024-07-16 00:40:33.801116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.205 [2024-07-16 00:40:33.811654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.205 [2024-07-16 00:40:33.812850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.205 [2024-07-16 00:40:33.812865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.205 [2024-07-16 00:40:33.823419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.205 [2024-07-16 00:40:33.824579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.205 [2024-07-16 00:40:33.824595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.205 [2024-07-16 00:40:33.835177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.836347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.836363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.846951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.848153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.848168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.858693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.859868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.859883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.870436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.871635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.871650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.882183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.883373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.893940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.895210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.895226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.905761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.906956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.906972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.917536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.918735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.918750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.929274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.930436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.930451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.941045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.942242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.942258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.952776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.953975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.953990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.964537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.965720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.965736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.976310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.977518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.466 [2024-07-16 00:40:33.977533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.466 [2024-07-16 00:40:33.988054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.466 [2024-07-16 00:40:33.989266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:33.989282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:33.999815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.001003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.001018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.011563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.012751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.012766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.023280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.024446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.024461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.035037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 
00:40:34.036197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.036213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.046783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.047980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.047996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.058553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.059745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.059761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.070370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.071573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.071589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.082118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.083313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.083332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.467 [2024-07-16 00:40:34.093867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.467 [2024-07-16 00:40:34.095063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.467 [2024-07-16 00:40:34.095078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.728 [2024-07-16 00:40:34.105644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.728 [2024-07-16 00:40:34.106832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.728 [2024-07-16 00:40:34.106847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.728 [2024-07-16 00:40:34.117377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.728 
[2024-07-16 00:40:34.118531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.728 [2024-07-16 00:40:34.118547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.728 [2024-07-16 00:40:34.129181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.728 [2024-07-16 00:40:34.130342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.728 [2024-07-16 00:40:34.130358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.728 [2024-07-16 00:40:34.141104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.728 [2024-07-16 00:40:34.142282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.728 [2024-07-16 00:40:34.142298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.728 [2024-07-16 00:40:34.152859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.728 [2024-07-16 00:40:34.154057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.154072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.164621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.165830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.165846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.176371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.177548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.177563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.188119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.189330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.189345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.199906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 
00:29:20.729 [2024-07-16 00:40:34.201104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.201119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.211630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.212833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.212848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.223384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.224559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.224574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.235135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.236332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.236348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.246882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.248083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.248099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.258625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.259819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.259834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.270364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.271564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.282106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) 
with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.283324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.283339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.293896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.295098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.295114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.305652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.306836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.306852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.317414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.318612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.318627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.329158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.330360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.330376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.340893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.342095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.342110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.729 [2024-07-16 00:40:34.352642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.729 [2024-07-16 00:40:34.353839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.729 [2024-07-16 00:40:34.353854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.364401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.365581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.365596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.376128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.377322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.377337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.387877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.389082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.389098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.399590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.400787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.400802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.411322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.412527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.412542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.990 [2024-07-16 00:40:34.423060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.990 [2024-07-16 00:40:34.424252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.990 [2024-07-16 00:40:34.424267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.434791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.435980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.435995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.446500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.447693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.447708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.458232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.459403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.459418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.469938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.471137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.471153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.481690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.482892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.482907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.493442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.494638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.494656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.505214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.506411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.506425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.516956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.518145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.528669] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.529866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.529880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.540395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.541583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.541598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.552153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.553312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.553327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.563881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.565077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.565092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.575651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.576852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.576867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.587364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.588568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.588583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 00:40:34.599081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.600277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.600292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:20.991 [2024-07-16 
00:40:34.610832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:20.991 [2024-07-16 00:40:34.612017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.991 [2024-07-16 00:40:34.612032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.251 [2024-07-16 00:40:34.622590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.251 [2024-07-16 00:40:34.623784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.251 [2024-07-16 00:40:34.623799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.251 [2024-07-16 00:40:34.634309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.251 [2024-07-16 00:40:34.635506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.251 [2024-07-16 00:40:34.635522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.251 [2024-07-16 00:40:34.646040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.251 [2024-07-16 00:40:34.647237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.251 [2024-07-16 00:40:34.647252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.251 [2024-07-16 00:40:34.657738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.251 [2024-07-16 00:40:34.658936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.251 [2024-07-16 00:40:34.658951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.251 [2024-07-16 00:40:34.669475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.252 [2024-07-16 00:40:34.670669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.252 [2024-07-16 00:40:34.670684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.252 [2024-07-16 00:40:34.681198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.252 [2024-07-16 00:40:34.682393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.252 [2024-07-16 00:40:34.682408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
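Each repeated pair of entries above is one injected digest failure: the TCP transport reports the Data digest error, and the queue pair then prints the WRITE command that completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal, non-authoritative way to eyeball how many such failures a captured copy of this output contains (assuming the console was saved to a hypothetical file bperf.log):

  # count CRC mismatches flagged by the TCP transport
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
  # count commands completed with a transient transport error (00/22)
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log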
00:29:21.252 [2024-07-16 00:40:34.692941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.252 [2024-07-16 00:40:34.694137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.252 [2024-07-16 00:40:34.694152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.252 [2024-07-16 00:40:34.704697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.252 [2024-07-16 00:40:34.705890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.252 [2024-07-16 00:40:34.705905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.252 [2024-07-16 00:40:34.716412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8faeb0) with pdu=0x2000190e8088 00:29:21.252 [2024-07-16 00:40:34.717607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.252 [2024-07-16 00:40:34.717622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:21.252 00:29:21.252 Latency(us) 00:29:21.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.252 nvme0n1 : 2.00 21676.55 84.67 0.00 0.00 5896.35 2184.53 13817.17 00:29:21.252 =================================================================================================================== 00:29:21.252 Total : 21676.55 84.67 0.00 0.00 5896.35 2184.53 13817.17 00:29:21.252 0 00:29:21.252 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:21.252 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:21.252 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:21.252 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:21.252 | .driver_specific 00:29:21.252 | .nvme_error 00:29:21.252 | .status_code 00:29:21.252 | .command_transient_transport_error' 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1278111 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1278111 ']' 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1278111 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1278111 00:29:21.512 00:40:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1278111' 00:29:21.512 killing process with pid 1278111 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1278111 00:29:21.512 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.512 00:29:21.512 Latency(us) 00:29:21.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.512 =================================================================================================================== 00:29:21.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.512 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1278111 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1278793 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1278793 /var/tmp/bperf.sock 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1278793 ']' 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:21.512 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.512 [2024-07-16 00:40:35.123525] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:21.512 [2024-07-16 00:40:35.123579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278793 ] 00:29:21.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.512 Zero copy mechanism will not be used. 
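The pass/fail check above reads the transient-error counter back over the bdevperf RPC socket: bdev_get_iostat is queried on /var/tmp/bperf.sock and jq extracts .driver_specific.nvme_error.status_code.command_transient_transport_error from the first bdev; the test only requires the value to be non-zero (here it was 170). A condensed sketch of that step, using the same socket and paths shown in the log (variable names are illustrative):

  # query per-bdev I/O statistics, including the NVMe error counters, from bdevperf
  stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1)
  # pull the transient transport error count for the first bdev
  errcount=$(echo "$stats" | jq -r \
          '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the digest-error test only asserts that at least one such error was observed
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"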
00:29:21.773 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.773 [2024-07-16 00:40:35.204264] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.773 [2024-07-16 00:40:35.256622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.345 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.345 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:22.345 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.345 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.605 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.866 nvme0n1 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.866 00:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.866 Zero copy mechanism will not be used. 00:29:22.866 Running I/O for 2 seconds... 
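Condensing the setup the harness just performed for this second run (131072-byte random writes, queue depth 16): NVMe error statistics and unlimited I/O retries are enabled on bdev_nvme, crc32c error injection is disabled while the controller is attached with the TCP data digest enabled (--ddgst), injection is then switched to corrupt mode with -i 32, and perform_tests is kicked off. A non-authoritative sketch of that sequence, reusing the RPC socket and paths from the log (the rpc wrapper function is illustrative):

  # helper that talks to the bdevperf instance over its RPC socket
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  # keep NVMe error counters and retry failed I/O indefinitely
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # make sure crc32c injection is off while the controller attaches
  rpc accel_error_inject_error -o crc32c -t disable
  # attach the target with the TCP data digest enabled
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-enable injection in corrupt mode (same -t corrupt -i 32 arguments as above)
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the 2-second randwrite workload that produces the digest errors below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests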
00:29:22.866 [2024-07-16 00:40:36.393092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.393464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.393494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.404801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.405155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.405173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.416024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.416379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.416397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.427851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.428180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.428197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.438044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.438376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.438394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.447932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.448269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.448287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.458287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.458642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.458659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.470157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.470389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.470407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.481849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.482180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.482197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.866 [2024-07-16 00:40:36.493380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:22.866 [2024-07-16 00:40:36.493732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.866 [2024-07-16 00:40:36.493749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.127 [2024-07-16 00:40:36.505940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.127 [2024-07-16 00:40:36.506282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.127 [2024-07-16 00:40:36.506299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.127 [2024-07-16 00:40:36.517134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.127 [2024-07-16 00:40:36.517467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.127 [2024-07-16 00:40:36.517484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.127 [2024-07-16 00:40:36.529544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.127 [2024-07-16 00:40:36.529884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.127 [2024-07-16 00:40:36.529901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.127 [2024-07-16 00:40:36.542385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.127 [2024-07-16 00:40:36.542739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.127 [2024-07-16 00:40:36.542756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.127 [2024-07-16 00:40:36.553405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.127 [2024-07-16 00:40:36.553776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.127 [2024-07-16 00:40:36.553793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.562384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.562739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.562755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.572859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.573171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.573188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.580281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.580509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.589537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.589858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.589875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.599333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.599560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.599577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.605963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.606064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.606079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.613699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.613990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.614007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.621552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.621766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.621782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.630631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.630988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.631005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.639628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.639927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.639946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.649737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.650159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.650176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.658521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.658877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.658893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.668396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.668774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 
[2024-07-16 00:40:36.668790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.676720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.677062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.677078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.684998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.685353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.685369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.692705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.692955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.692972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.701205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.701538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.701554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.708618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.708818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.708835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.715874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.716079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.716095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.725387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.725717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.725734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.733718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.734133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.734149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.740533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.740846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.740862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.748105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.128 [2024-07-16 00:40:36.748388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.128 [2024-07-16 00:40:36.748405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.128 [2024-07-16 00:40:36.758186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.758499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.758516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.766168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.766429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.766446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.775576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.775934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.775950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.784537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.784944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.784960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.792354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.792806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.792822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.801888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.802118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.802134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.809181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.809389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.809406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.816433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.816635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.825625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.825996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.826012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.830702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.830905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.830921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.837225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.837518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.837535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.844732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.844989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.845006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.852761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.852993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.853012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.860522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.860779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.860796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.870449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.870745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.870761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.879735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.879951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.879967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.888878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.889250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.889266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.899276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.899626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.899643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.909298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.909503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.909519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.917403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.917773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.917789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.928481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.928789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.928805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.938265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.938597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.938614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.949201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.949481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.949497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.959064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.959431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.959447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.968786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 
[2024-07-16 00:40:36.969120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.969136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.975107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.975319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.975335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.983436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.390 [2024-07-16 00:40:36.983641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.390 [2024-07-16 00:40:36.983657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.390 [2024-07-16 00:40:36.990940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.391 [2024-07-16 00:40:36.991378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.391 [2024-07-16 00:40:36.991394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.391 [2024-07-16 00:40:36.997474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.391 [2024-07-16 00:40:36.997778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.391 [2024-07-16 00:40:36.997795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.391 [2024-07-16 00:40:37.006108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.391 [2024-07-16 00:40:37.006490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.391 [2024-07-16 00:40:37.006506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.391 [2024-07-16 00:40:37.013846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.391 [2024-07-16 00:40:37.014167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.391 [2024-07-16 00:40:37.014184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.021861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with 
pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.022062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.022079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.030091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.030397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.030414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.036673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.036897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.036914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.044503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.044704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.044720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.049598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.049942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.049958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.059627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.059878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.059894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.066338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.066677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.066694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.073367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.073571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.073590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.080570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.080771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.080787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.087936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.088138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.088155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.096721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.097110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.097126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.104603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.104804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.104820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.112615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.112863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.112879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.121738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.122071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.122087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.130131] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.130443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.652 [2024-07-16 00:40:37.130460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.652 [2024-07-16 00:40:37.138975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.652 [2024-07-16 00:40:37.139356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.139373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.147668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.147875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.147891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.155028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.155336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.155352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.163500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.163869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.163886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.172272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.172518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.172535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.180898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.181325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.181341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:23.653 [2024-07-16 00:40:37.188699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.189063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.189080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.194849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.195051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.195068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.199946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.200265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.200282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.207559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.207758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.207774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.213871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.214072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.214089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.221413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.221796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.227811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.228069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.228085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.234327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.234653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.234670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.241305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.241620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.241637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.249846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.250156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.250172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.256767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.257138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.257154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.266508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.266708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.266724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.273435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.273636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.273655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.653 [2024-07-16 00:40:37.281203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.653 [2024-07-16 00:40:37.281508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.653 [2024-07-16 00:40:37.281524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.287600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.287801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.287817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.295419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.295620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.295637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.302780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.302981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.302997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.310318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.310521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.310538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.316826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.317010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.317027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.325260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.325517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.325533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.333627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.333804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.333820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.342804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.342974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.342990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.349461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.349794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.349810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.356708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.356901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.356918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.363827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.364007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.364023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.370159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.370446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.370463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.375383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.375706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.375723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.381188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.381374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 
00:40:37.381390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.387413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.387748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.387765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.394636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.915 [2024-07-16 00:40:37.394801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.915 [2024-07-16 00:40:37.394820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.915 [2024-07-16 00:40:37.400328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.400511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.405289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.405459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.405475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.413584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.413885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.413902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.420805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.420972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.420987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.429454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.429622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.429637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.435958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.436284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.436301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.443037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.443226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.443247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.450702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.451091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.451108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.458252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.458607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.458623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.466118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.466291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.466307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.473766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.473984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.474001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.480021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.480188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.480203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.485888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.486169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.486186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.492889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.493099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.493115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.501464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.501823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.501840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.511341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.511552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.511568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.521043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.521183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.521198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.530180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.530334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.916 [2024-07-16 00:40:37.540656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:23.916 [2024-07-16 00:40:37.540957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.916 [2024-07-16 00:40:37.540974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.177 [2024-07-16 00:40:37.550883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.177 [2024-07-16 00:40:37.551196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.177 [2024-07-16 00:40:37.551212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.177 [2024-07-16 00:40:37.561134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.177 [2024-07-16 00:40:37.561432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.177 [2024-07-16 00:40:37.561448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.177 [2024-07-16 00:40:37.571301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.177 [2024-07-16 00:40:37.571604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.177 [2024-07-16 00:40:37.571620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.177 [2024-07-16 00:40:37.581401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.177 [2024-07-16 00:40:37.581670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.177 [2024-07-16 00:40:37.581686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.177 [2024-07-16 00:40:37.592028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.177 [2024-07-16 00:40:37.592215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.177 [2024-07-16 00:40:37.592235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.602151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.602329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.602345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.612020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.612395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.612415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.622179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.622358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.622374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.629441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.629637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.629654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.636867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.637264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.637280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.642749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.643033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.643050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.647718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.648004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.648021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.654221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.654570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.654586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.661319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 
[2024-07-16 00:40:37.661612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.661628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.668786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.669080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.669096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.676340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.676511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.676526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.683249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.683535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.683551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.689836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.690017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.690034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.699060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.699392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.699409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.708023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.708193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.708209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.717111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with 
pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.717419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.717436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.727173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.727367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.727383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.734560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.734837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.734854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.744950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.745145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.745162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.754007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.754368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.754385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.763593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.763841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.763857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.773309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.773606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.773622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.783258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.783657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.792720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.792992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.793008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.178 [2024-07-16 00:40:37.802611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.178 [2024-07-16 00:40:37.802788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.178 [2024-07-16 00:40:37.802805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.810817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.810989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.811006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.817708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.817922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.817939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.825685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.825981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.826000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.832330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.832500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.832516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.838776] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.839105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.839122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.844305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.844629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.844645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.851611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.851779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.851795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.857399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.857717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.857733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.864096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.864415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.864432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.871952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.872152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.872169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.879738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.879928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.879944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:24.440 [2024-07-16 00:40:37.886641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.886864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.886880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.894382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.894626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.894643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.900486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.900759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.440 [2024-07-16 00:40:37.900776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.440 [2024-07-16 00:40:37.907980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.440 [2024-07-16 00:40:37.908312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.908328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.916794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.916962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.916977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.924459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.924627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.924643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.932590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.932757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.932774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.940168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.940673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.940689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.950315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.950485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.950501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.960510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.960755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.960771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.967247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.967420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.967437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.974887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.975058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.975074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.981621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.981803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.981819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.986695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.986943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.986960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:37.992570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:37.992751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:37.992766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.000437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.000627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.000643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.006162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.006336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.006353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.011212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.011393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.011411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.017557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.017919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.017935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.025265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.025630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.025647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.036159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.036463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.036479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.047562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.047931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.047947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.441 [2024-07-16 00:40:38.059594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.441 [2024-07-16 00:40:38.059930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.441 [2024-07-16 00:40:38.059947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.071550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.071903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.071920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.083395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.083986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.084003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.094942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.095458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.095474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.108012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.108336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.108352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.120280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.120724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 
[2024-07-16 00:40:38.120741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.131581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.132112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.132129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.702 [2024-07-16 00:40:38.140987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.702 [2024-07-16 00:40:38.141221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.702 [2024-07-16 00:40:38.141244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.151943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.152180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.152197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.162793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.163018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.163034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.174130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.174449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.174465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.186186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.186581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.186597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.196354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.196794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.196810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.206593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.206747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.206763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.217576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.217969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.217985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.229001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.229267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.229284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.239993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.240277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.240293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.251250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.251467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.251483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.262384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.262545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.262561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.272526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.272827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.272844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.284362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.284633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.284649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.294076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.294377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.294396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.303379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.303580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.303597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.313203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.313548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.313564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.323032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.323201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.323218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.703 [2024-07-16 00:40:38.332416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.703 [2024-07-16 00:40:38.332578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.703 [2024-07-16 00:40:38.332594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.963 [2024-07-16 00:40:38.341596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.963 [2024-07-16 00:40:38.341867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.963 [2024-07-16 00:40:38.341883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.963 [2024-07-16 00:40:38.351040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.963 [2024-07-16 00:40:38.351411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.963 [2024-07-16 00:40:38.351427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.963 [2024-07-16 00:40:38.360620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.963 [2024-07-16 00:40:38.360904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.963 [2024-07-16 00:40:38.360920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.963 [2024-07-16 00:40:38.370751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8fb050) with pdu=0x2000190fef90 00:29:24.963 [2024-07-16 00:40:38.371032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.963 [2024-07-16 00:40:38.371048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.963 00:29:24.963 Latency(us) 00:29:24.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.963 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:24.963 nvme0n1 : 2.00 3600.37 450.05 0.00 0.00 4436.99 1979.73 17694.72 00:29:24.963 =================================================================================================================== 00:29:24.963 Total : 3600.37 450.05 0.00 0.00 4436.99 1979.73 17694.72 00:29:24.963 0 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:24.963 | .driver_specific 00:29:24.963 | .nvme_error 00:29:24.963 | .status_code 00:29:24.963 | .command_transient_transport_error' 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 )) 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1278793 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1278793 ']' 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1278793 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 
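The get_transient_errcount step traced above is how host/digest.sh checks that the injected data-digest failures were actually reported: it calls the bdev_get_iostat RPC on the bperf socket and pulls the transient-transport-error count out with jq, then asserts it is non-zero ((( 232 > 0 )) in this run). A minimal standalone sketch of that query, reusing the rpc.py path, socket name, and jq filter visible in this trace (all of which are specific to this workspace and run):

  # Sketch only: read the counter that host/digest.sh asserts on.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors were reported as transient transport errors: $errcount"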
00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.963 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1278793 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1278793' 00:29:25.223 killing process with pid 1278793 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1278793 00:29:25.223 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.223 00:29:25.223 Latency(us) 00:29:25.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.223 =================================================================================================================== 00:29:25.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1278793 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1276381 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1276381 ']' 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1276381 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1276381 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1276381' 00:29:25.223 killing process with pid 1276381 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1276381 00:29:25.223 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1276381 00:29:25.484 00:29:25.484 real 0m16.098s 00:29:25.484 user 0m31.515s 00:29:25.484 sys 0m3.302s 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.484 ************************************ 00:29:25.484 END TEST nvmf_digest_error 00:29:25.484 ************************************ 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:25.484 00:40:38 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:25.484 00:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:25.484 rmmod nvme_tcp 00:29:25.484 rmmod nvme_fabrics 00:29:25.484 rmmod nvme_keyring 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1276381 ']' 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1276381 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1276381 ']' 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1276381 00:29:25.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1276381) - No such process 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1276381 is not found' 00:29:25.484 Process with pid 1276381 is not found 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.484 00:40:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.028 00:40:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:28.028 00:29:28.028 real 0m42.590s 00:29:28.028 user 1m5.126s 00:29:28.028 sys 0m12.931s 00:29:28.028 00:40:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:28.028 00:40:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.028 ************************************ 00:29:28.028 END TEST nvmf_digest 00:29:28.028 ************************************ 00:29:28.028 00:40:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:28.028 00:40:41 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:28.028 00:40:41 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:28.028 00:40:41 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:28.028 00:40:41 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:28.028 00:40:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:28.028 00:40:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.028 00:40:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.028 ************************************ 00:29:28.028 START TEST nvmf_bdevperf 00:29:28.028 ************************************ 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:28.028 * Looking for test storage... 00:29:28.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:28.028 00:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.029 00:40:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:36.169 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.170 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.170 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.170 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.170 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:36.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:29:36.170 00:29:36.170 --- 10.0.0.2 ping statistics --- 00:29:36.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.170 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:29:36.170 00:29:36.170 --- 10.0.0.1 ping statistics --- 00:29:36.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.170 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1284164 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1284164 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1284164 ']' 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.170 00:40:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.170 [2024-07-16 00:40:49.668742] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:36.170 [2024-07-16 00:40:49.668808] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.170 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.170 [2024-07-16 00:40:49.762543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.431 [2024-07-16 00:40:49.857086] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
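Before the target comes up, nvmftestinit (traced above) moves one of the two E810 ports into a private network namespace and addresses the pair, so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2 inside cvl_0_0_ns_spdk) can reach each other, with an iptables rule opening TCP port 4420 for the NVMe/TCP listener. Condensed from the trace, and keeping the interface names and addresses this particular run happened to pick, the bring-up is roughly:

  # Sketch condensed from the nvmf_tcp_init trace above; names and addresses are this run's.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # nvmfappstart then launches the target inside the namespace (build path from this workspace):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE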
00:29:36.431 [2024-07-16 00:40:49.857152] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.431 [2024-07-16 00:40:49.857160] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.431 [2024-07-16 00:40:49.857168] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.431 [2024-07-16 00:40:49.857174] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.431 [2024-07-16 00:40:49.857320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.431 [2024-07-16 00:40:49.857867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.431 [2024-07-16 00:40:49.857866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 [2024-07-16 00:40:50.483281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 Malloc0 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 [2024-07-16 00:40:50.554710] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.002 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.003 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.003 { 00:29:37.003 "params": { 00:29:37.003 "name": "Nvme$subsystem", 00:29:37.003 "trtype": "$TEST_TRANSPORT", 00:29:37.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.003 "adrfam": "ipv4", 00:29:37.003 "trsvcid": "$NVMF_PORT", 00:29:37.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.003 "hdgst": ${hdgst:-false}, 00:29:37.003 "ddgst": ${ddgst:-false} 00:29:37.003 }, 00:29:37.003 "method": "bdev_nvme_attach_controller" 00:29:37.003 } 00:29:37.003 EOF 00:29:37.003 )") 00:29:37.003 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:37.003 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:37.003 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:37.003 00:40:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.003 "params": { 00:29:37.003 "name": "Nvme1", 00:29:37.003 "trtype": "tcp", 00:29:37.003 "traddr": "10.0.0.2", 00:29:37.003 "adrfam": "ipv4", 00:29:37.003 "trsvcid": "4420", 00:29:37.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.003 "hdgst": false, 00:29:37.003 "ddgst": false 00:29:37.003 }, 00:29:37.003 "method": "bdev_nvme_attach_controller" 00:29:37.003 }' 00:29:37.003 [2024-07-16 00:40:50.608709] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:37.003 [2024-07-16 00:40:50.608764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284437 ] 00:29:37.266 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.266 [2024-07-16 00:40:50.674681] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.266 [2024-07-16 00:40:50.740383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.266 Running I/O for 1 seconds... 
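With the target listening, host/bdevperf.sh configures it over RPC and then drives it with the bdevperf example app, feeding it the generated JSON printed above (a single bdev_nvme_attach_controller of Nvme1 against tcp://10.0.0.2:4420 with digests off). A sketch of the same sequence, condensed from the rpc_cmd and bdevperf invocations in this trace, assuming the target's default /var/tmp/spdk.sock RPC socket and a config saved to a file rather than the /dev/fd/62 pipe the harness uses:

  # Sketch of the target-side setup and the 1-second verify pass shown above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # nvme.json holds the generated config printed above (Nvme1 -> 10.0.0.2:4420, hdgst/ddgst false)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json nvme.json -q 128 -o 4096 -w verify -t 1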
00:29:38.651 00:29:38.651 Latency(us) 00:29:38.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.651 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:38.651 Verification LBA range: start 0x0 length 0x4000 00:29:38.651 Nvme1n1 : 1.01 8815.40 34.44 0.00 0.00 14458.92 2102.61 16165.55 00:29:38.651 =================================================================================================================== 00:29:38.651 Total : 8815.40 34.44 0.00 0.00 14458.92 2102.61 16165.55 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1284645 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:38.651 { 00:29:38.651 "params": { 00:29:38.651 "name": "Nvme$subsystem", 00:29:38.651 "trtype": "$TEST_TRANSPORT", 00:29:38.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.651 "adrfam": "ipv4", 00:29:38.651 "trsvcid": "$NVMF_PORT", 00:29:38.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.651 "hdgst": ${hdgst:-false}, 00:29:38.651 "ddgst": ${ddgst:-false} 00:29:38.651 }, 00:29:38.651 "method": "bdev_nvme_attach_controller" 00:29:38.651 } 00:29:38.651 EOF 00:29:38.651 )") 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:38.651 00:40:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:38.651 "params": { 00:29:38.651 "name": "Nvme1", 00:29:38.651 "trtype": "tcp", 00:29:38.651 "traddr": "10.0.0.2", 00:29:38.651 "adrfam": "ipv4", 00:29:38.651 "trsvcid": "4420", 00:29:38.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.651 "hdgst": false, 00:29:38.651 "ddgst": false 00:29:38.651 }, 00:29:38.651 "method": "bdev_nvme_attach_controller" 00:29:38.651 }' 00:29:38.651 [2024-07-16 00:40:52.080958] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:38.651 [2024-07-16 00:40:52.081017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284645 ] 00:29:38.651 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.651 [2024-07-16 00:40:52.145567] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.651 [2024-07-16 00:40:52.210012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.911 Running I/O for 15 seconds... 
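The second bdevperf pass (-t 15 -f) is the failure-injection run: as soon as it reports "Running I/O for 15 seconds...", the script hard-kills the nvmf_tgt started earlier (pid 1284164 in this run) and pauses, so the long stream of ABORTED - SQ DELETION completions that follows is the expected fallout of the target disappearing while I/O is still queued. From the host/bdevperf.sh steps visible in the trace below:

  kill -9 "$nvmfpid"   # pid 1284164 here: the target is killed out from under bdevperf
  sleep 3              # give bdevperf time to observe the failures before the test moves on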
00:29:41.496 00:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1284164
00:29:41.496 00:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:41.496 [2024-07-16 00:40:55.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.496 [2024-07-16 00:40:55.046391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.496 [2024-07-16 00:40:55.046411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.496 [2024-07-16 00:40:55.046421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every other I/O still queued on qid:1 (READs for lba 99552-99600, WRITEs for lba 99616-100552), each aborted with SQ DELETION (00/08) ...]
00:29:41.499 [2024-07-16 00:40:55.048531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1040420 is same with the state(5) to be set
00:29:41.499 [2024-07-16 00:40:55.048539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:41.499 [2024-07-16 00:40:55.048545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:41.499 [2024-07-16 00:40:55.048551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0
00:29:41.499 [2024-07-16 00:40:55.048559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.499 [2024-07-16 00:40:55.048596] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1040420 was disconnected and freed. reset controller.
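Every completion printed in the dump above carries the same status, ABORTED - SQ DELETION (00/08): once the test kills the target process (kill -9 above) the TCP connection drops, and the initiator completes every command still queued on qid:1 with that abort status instead of leaving it in flight. For reference only, the short standalone C sketch below shows how that (00/08) pair maps onto status constants from SPDK's public nvme_spec.h header; it is an illustration under the assumption that the SPDK headers are on the include path, not code taken from bdevperf or the nvmf target.

    /*
     * Minimal sketch (not part of the test above): decode the completion status
     * that dominates the dump, "ABORTED - SQ DELETION (00/08)", i.e. status code
     * type 0x0 (generic) with status code 0x08, using SPDK's public NVMe headers.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #include "spdk/nvme_spec.h"

    /* True when a completion was aborted because its submission queue was
     * deleted, which is what the initiator reports for each queued I/O once
     * the connection to the target is gone. */
    static bool
    completion_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    int
    main(void)
    {
        struct spdk_nvme_cpl cpl;

        /* Hand-build a completion carrying the (00/08) status seen in the log. */
        memset(&cpl, 0, sizeof(cpl));
        cpl.status.sct = SPDK_NVME_SCT_GENERIC;           /* 0x0  */
        cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION; /* 0x08 */

        printf("aborted by SQ deletion: %s\n",
               completion_is_sq_deletion_abort(&cpl) ? "yes" : "no");
        return 0;
    }

Compiling the sketch only requires pointing the compiler at the SPDK public include directory; nothing from the test environment above is needed.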
00:29:41.499 [2024-07-16 00:40:55.052133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.499 [2024-07-16 00:40:55.052179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor
00:29:41.499 [2024-07-16 00:40:55.052913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.499 [2024-07-16 00:40:55.052933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420
00:29:41.499 [2024-07-16 00:40:55.052941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set
00:29:41.499 [2024-07-16 00:40:55.053163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor
00:29:41.499 [2024-07-16 00:40:55.053389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.499 [2024-07-16 00:40:55.053398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.499 [2024-07-16 00:40:55.053406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.499 [2024-07-16 00:40:55.056997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 16 more reset attempts, one roughly every 14 ms (timestamps 00:40:55.066 through 00:40:55.275), fail the same way: connect() to 10.0.0.2:4420 on tqpair=0xe0f6e0 returns errno 111 and controller reinitialization fails ...]
00:29:41.762 [2024-07-16 00:40:55.288737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.762 [2024-07-16 00:40:55.289448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.762 [2024-07-16 00:40:55.289485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420
00:29:41.762 [2024-07-16 00:40:55.289496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set
00:29:41.762 [2024-07-16 00:40:55.289736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor
00:29:41.762 [2024-07-16 00:40:55.289959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.762 [2024-07-16 00:40:55.289967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.762 [2024-07-16 00:40:55.289975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.762 [2024-07-16 00:40:55.293542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.762 [2024-07-16 00:40:55.302562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.303308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-16 00:40:55.303346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.762 [2024-07-16 00:40:55.303358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.762 [2024-07-16 00:40:55.303600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.762 [2024-07-16 00:40:55.303823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.762 [2024-07-16 00:40:55.303832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.762 [2024-07-16 00:40:55.303840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.762 [2024-07-16 00:40:55.307410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.762 [2024-07-16 00:40:55.316426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.317025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-16 00:40:55.317044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.762 [2024-07-16 00:40:55.317051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.762 [2024-07-16 00:40:55.317278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.762 [2024-07-16 00:40:55.317498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.762 [2024-07-16 00:40:55.317508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.762 [2024-07-16 00:40:55.317515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.762 [2024-07-16 00:40:55.321067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.762 [2024-07-16 00:40:55.330282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.330933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-16 00:40:55.330970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.762 [2024-07-16 00:40:55.330980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.762 [2024-07-16 00:40:55.331223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.762 [2024-07-16 00:40:55.331455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.762 [2024-07-16 00:40:55.331464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.762 [2024-07-16 00:40:55.331471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.762 [2024-07-16 00:40:55.335030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.762 [2024-07-16 00:40:55.344261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.344999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-16 00:40:55.345036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.762 [2024-07-16 00:40:55.345048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.762 [2024-07-16 00:40:55.345295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.762 [2024-07-16 00:40:55.345519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.762 [2024-07-16 00:40:55.345528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.762 [2024-07-16 00:40:55.345535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.762 [2024-07-16 00:40:55.349094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.762 [2024-07-16 00:40:55.358107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.358820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-16 00:40:55.358857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.762 [2024-07-16 00:40:55.358868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.762 [2024-07-16 00:40:55.359107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.762 [2024-07-16 00:40:55.359339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.762 [2024-07-16 00:40:55.359348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.762 [2024-07-16 00:40:55.359355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.762 [2024-07-16 00:40:55.362911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.762 [2024-07-16 00:40:55.371927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.762 [2024-07-16 00:40:55.372570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-16 00:40:55.372609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.763 [2024-07-16 00:40:55.372619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.763 [2024-07-16 00:40:55.372858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.763 [2024-07-16 00:40:55.373081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.763 [2024-07-16 00:40:55.373090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.763 [2024-07-16 00:40:55.373097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.763 [2024-07-16 00:40:55.376669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.763 [2024-07-16 00:40:55.385910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.763 [2024-07-16 00:40:55.386603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-16 00:40:55.386641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:41.763 [2024-07-16 00:40:55.386652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:41.763 [2024-07-16 00:40:55.386891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:41.763 [2024-07-16 00:40:55.387114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.763 [2024-07-16 00:40:55.387123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.763 [2024-07-16 00:40:55.387131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.763 [2024-07-16 00:40:55.390700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.026 [2024-07-16 00:40:55.399740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.400369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.400407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.400419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.400659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.400882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.400891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.400898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.404479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.026 [2024-07-16 00:40:55.413708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.414331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.414369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.414381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.414624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.414847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.414856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.414864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.418432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.026 [2024-07-16 00:40:55.427666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.428346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.428388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.428398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.428637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.428861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.428870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.428877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.432444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.026 [2024-07-16 00:40:55.441676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.442429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.442467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.442479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.442720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.442943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.442952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.442959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.446529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.026 [2024-07-16 00:40:55.455552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.456245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.456283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.456295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.456536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.456760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.456769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.456776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.460345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.026 [2024-07-16 00:40:55.469371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.469958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.469977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.469984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.470205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.026 [2024-07-16 00:40:55.470436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.026 [2024-07-16 00:40:55.470445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.026 [2024-07-16 00:40:55.470452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.026 [2024-07-16 00:40:55.474007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.026 [2024-07-16 00:40:55.483225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.026 [2024-07-16 00:40:55.483746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.026 [2024-07-16 00:40:55.483783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.026 [2024-07-16 00:40:55.483793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.026 [2024-07-16 00:40:55.484032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.484264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.484274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.484281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.487850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.497089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.497776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.497814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.497825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.498064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.498296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.498305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.498312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.501866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.027 [2024-07-16 00:40:55.510888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.511511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.511530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.511538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.511758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.511978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.511985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.511992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.515551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.524782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.525267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.525282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.525290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.525509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.525727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.525736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.525743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.529297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.027 [2024-07-16 00:40:55.538720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.539341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.539379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.539391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.539633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.539856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.539866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.539873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.543442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.552671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.553350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.553388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.553400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.553643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.553866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.553875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.553882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.557450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.027 [2024-07-16 00:40:55.566680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.567289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.567326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.567342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.567584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.567808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.567817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.567824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.571389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.580621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.581377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.581414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.581425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.581668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.581891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.581900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.581907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.585478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.027 [2024-07-16 00:40:55.594501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.594965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.594983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.594991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.595211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.595436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.595444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.595452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.599005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.608426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.609038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.609054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.609061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.609286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.609507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.609515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.609526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.613079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.027 [2024-07-16 00:40:55.622305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.622922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.622959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.622969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.623208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.027 [2024-07-16 00:40:55.623440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.027 [2024-07-16 00:40:55.623449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.027 [2024-07-16 00:40:55.623457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.027 [2024-07-16 00:40:55.627015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.027 [2024-07-16 00:40:55.636256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.027 [2024-07-16 00:40:55.636878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.027 [2024-07-16 00:40:55.636895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.027 [2024-07-16 00:40:55.636903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.027 [2024-07-16 00:40:55.637123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.028 [2024-07-16 00:40:55.637350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.028 [2024-07-16 00:40:55.637358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.028 [2024-07-16 00:40:55.637365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.028 [2024-07-16 00:40:55.640917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.028 [2024-07-16 00:40:55.650142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.028 [2024-07-16 00:40:55.650816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.028 [2024-07-16 00:40:55.650855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.028 [2024-07-16 00:40:55.650865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.028 [2024-07-16 00:40:55.651104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.028 [2024-07-16 00:40:55.651337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.028 [2024-07-16 00:40:55.651347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.028 [2024-07-16 00:40:55.651354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.028 [2024-07-16 00:40:55.654911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.291 [2024-07-16 00:40:55.664137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.664847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.664885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.664895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.665135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.665364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.665373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.665380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.668941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.291 [2024-07-16 00:40:55.677960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.678662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.678699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.678710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.678949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.679172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.679181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.679189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.682757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.291 [2024-07-16 00:40:55.691790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.692355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.692392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.692404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.692646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.692869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.692878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.692885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.696457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.291 [2024-07-16 00:40:55.705694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.706356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.706393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.706405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.706652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.706875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.706884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.706891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.710457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.291 [2024-07-16 00:40:55.719684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.720289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.720307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.720315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.720535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.720755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.720762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.720769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.724328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.291 [2024-07-16 00:40:55.733551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.734294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.734334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.734346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.734588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.734812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.734820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.734827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.738400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.291 [2024-07-16 00:40:55.747425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.748129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.748166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.748176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.748422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.748647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.748656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.291 [2024-07-16 00:40:55.748667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.291 [2024-07-16 00:40:55.752221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.291 [2024-07-16 00:40:55.761282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.291 [2024-07-16 00:40:55.761814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.291 [2024-07-16 00:40:55.761831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.291 [2024-07-16 00:40:55.761838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.291 [2024-07-16 00:40:55.762058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.291 [2024-07-16 00:40:55.762284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.291 [2024-07-16 00:40:55.762292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.762300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.765854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.775288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.775825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.775862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.775873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.776112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.776344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.776353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.776361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.779920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.292 [2024-07-16 00:40:55.789160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.789862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.789899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.789910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.790148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.790379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.790389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.790396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.793957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.802972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.803653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.803695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.803707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.803947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.804170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.804179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.804186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.807753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.292 [2024-07-16 00:40:55.816782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.817516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.817553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.817563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.817802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.818025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.818033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.818041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.821610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.830633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.831315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.831359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.831372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.831612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.831835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.831844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.831851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.835418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.292 [2024-07-16 00:40:55.844442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.844970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.844988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.844995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.845216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.845446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.845455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.845462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.849010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.858446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.859063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.859078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.859086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.859310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.859530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.859539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.859546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.863095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.292 [2024-07-16 00:40:55.872320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.873014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.873051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.873062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.873309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.873533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.873542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.873550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.877104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.886139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.886759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.886778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.886786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.887005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.887225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.887238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.887245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.890799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.292 [2024-07-16 00:40:55.900027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.900610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.900627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.900634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.900854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.901074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.292 [2024-07-16 00:40:55.901081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.292 [2024-07-16 00:40:55.901088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.292 [2024-07-16 00:40:55.904646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.292 [2024-07-16 00:40:55.913864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.292 [2024-07-16 00:40:55.914541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.292 [2024-07-16 00:40:55.914579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.292 [2024-07-16 00:40:55.914590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.292 [2024-07-16 00:40:55.914828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.292 [2024-07-16 00:40:55.915052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.293 [2024-07-16 00:40:55.915061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.293 [2024-07-16 00:40:55.915068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.293 [2024-07-16 00:40:55.918634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.554 [2024-07-16 00:40:55.927869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.928452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.928471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.928478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.928699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.928919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.928927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.928934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:55.932492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.554 [2024-07-16 00:40:55.941719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.942290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.942306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.942318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.942537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.942756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.942765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.942772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:55.946325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.554 [2024-07-16 00:40:55.955549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.956164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.956178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.956186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.956409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.956629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.956637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.956643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:55.960194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.554 [2024-07-16 00:40:55.969424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.970029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.970044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.970052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.970276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.970498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.970506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.970512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:55.974064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.554 [2024-07-16 00:40:55.983289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.983965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.984002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.984013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.984260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.984484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.984497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.984505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:55.988074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.554 [2024-07-16 00:40:55.997086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:55.997652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:55.997671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:55.997679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:55.997898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:55.998118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:55.998125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:55.998132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:56.001692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.554 [2024-07-16 00:40:56.010900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:56.011571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:56.011608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:56.011618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:56.011858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.554 [2024-07-16 00:40:56.012081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.554 [2024-07-16 00:40:56.012089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.554 [2024-07-16 00:40:56.012097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.554 [2024-07-16 00:40:56.015665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.554 [2024-07-16 00:40:56.024899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.554 [2024-07-16 00:40:56.025582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.554 [2024-07-16 00:40:56.025619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.554 [2024-07-16 00:40:56.025630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.554 [2024-07-16 00:40:56.025869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.026092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.026100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.026108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.029674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.038899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.039537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.039574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.039584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.039824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.040047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.040056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.040063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.043631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.555 [2024-07-16 00:40:56.052854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.053453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.053472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.053480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.053700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.053921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.053929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.053936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.057497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.066714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.067447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.067484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.067494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.067733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.067957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.067965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.067973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.071539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.555 [2024-07-16 00:40:56.080793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.081356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.081394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.081406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.081652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.081875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.081884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.081892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.085467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.094697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.095336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.095372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.095384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.095626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.095849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.095858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.095865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.099436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.555 [2024-07-16 00:40:56.108661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.109311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.109348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.109358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.109597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.109820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.109829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.109836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.113403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.122635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.123331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.123368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.123380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.123622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.123845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.123854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.123866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.127429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.555 [2024-07-16 00:40:56.136662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.137225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.137267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.137279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.137520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.137744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.137752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.137759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.141325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.150602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.151309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.151347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.151357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.151597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.151820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.151829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.151836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.155401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.555 [2024-07-16 00:40:56.164421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.165077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.165114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.165124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.165373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.555 [2024-07-16 00:40:56.165597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.555 [2024-07-16 00:40:56.165605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.555 [2024-07-16 00:40:56.165613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.555 [2024-07-16 00:40:56.169170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.555 [2024-07-16 00:40:56.178383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.555 [2024-07-16 00:40:56.179032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.555 [2024-07-16 00:40:56.179073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.555 [2024-07-16 00:40:56.179084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.555 [2024-07-16 00:40:56.179332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.556 [2024-07-16 00:40:56.179556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.556 [2024-07-16 00:40:56.179565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.556 [2024-07-16 00:40:56.179573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.556 [2024-07-16 00:40:56.183132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.817 [2024-07-16 00:40:56.192375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.193013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.193050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.817 [2024-07-16 00:40:56.193061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.817 [2024-07-16 00:40:56.193310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.817 [2024-07-16 00:40:56.193535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.817 [2024-07-16 00:40:56.193543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.817 [2024-07-16 00:40:56.193551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.817 [2024-07-16 00:40:56.197108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.817 [2024-07-16 00:40:56.206335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.206964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.206981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.817 [2024-07-16 00:40:56.206989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.817 [2024-07-16 00:40:56.207209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.817 [2024-07-16 00:40:56.207435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.817 [2024-07-16 00:40:56.207443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.817 [2024-07-16 00:40:56.207450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.817 [2024-07-16 00:40:56.211002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.817 [2024-07-16 00:40:56.220227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.220867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.220905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.817 [2024-07-16 00:40:56.220915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.817 [2024-07-16 00:40:56.221154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.817 [2024-07-16 00:40:56.221394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.817 [2024-07-16 00:40:56.221404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.817 [2024-07-16 00:40:56.221411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.817 [2024-07-16 00:40:56.224976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.817 [2024-07-16 00:40:56.234203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.234881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.234918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.817 [2024-07-16 00:40:56.234928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.817 [2024-07-16 00:40:56.235167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.817 [2024-07-16 00:40:56.235400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.817 [2024-07-16 00:40:56.235409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.817 [2024-07-16 00:40:56.235416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.817 [2024-07-16 00:40:56.238977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.817 [2024-07-16 00:40:56.248201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.248795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.248813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.817 [2024-07-16 00:40:56.248820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.817 [2024-07-16 00:40:56.249040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.817 [2024-07-16 00:40:56.249267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.817 [2024-07-16 00:40:56.249275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.817 [2024-07-16 00:40:56.249282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.817 [2024-07-16 00:40:56.252834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.817 [2024-07-16 00:40:56.262055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.817 [2024-07-16 00:40:56.262635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-07-16 00:40:56.262650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.262658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.262878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.263097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.263104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.263111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.266668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.275904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.276588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.276625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.276636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.276875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.277098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.277107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.277114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.280682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.818 [2024-07-16 00:40:56.289922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.290602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.290639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.290650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.290889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.291113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.291121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.291128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.294695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.303921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.304603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.304640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.304651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.304890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.305113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.305122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.305129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.308696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.818 [2024-07-16 00:40:56.317924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.318525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.318543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.318555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.318776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.318996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.319004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.319011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.322569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.331790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.332440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.332477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.332487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.332727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.332950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.332958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.332966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.336531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.818 [2024-07-16 00:40:56.345753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.346466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.346503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.346513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.346752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.346975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.346984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.346991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.350560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.359574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.360274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.360311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.360323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.360566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.360789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.360803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.360810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.364381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.818 [2024-07-16 00:40:56.373399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.374090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.374127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.374137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.374386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.374611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.374619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.374626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.378184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.387402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.388118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.388155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.388166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.388415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.388639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.388647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.388654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.392210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.818 [2024-07-16 00:40:56.401220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.401960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.818 [2024-07-16 00:40:56.401997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.818 [2024-07-16 00:40:56.402008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.818 [2024-07-16 00:40:56.402256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.818 [2024-07-16 00:40:56.402480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.818 [2024-07-16 00:40:56.402489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.818 [2024-07-16 00:40:56.402496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.818 [2024-07-16 00:40:56.406053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.818 [2024-07-16 00:40:56.415063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.818 [2024-07-16 00:40:56.415746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.819 [2024-07-16 00:40:56.415784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.819 [2024-07-16 00:40:56.415794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.819 [2024-07-16 00:40:56.416033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.819 [2024-07-16 00:40:56.416265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.819 [2024-07-16 00:40:56.416274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.819 [2024-07-16 00:40:56.416282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.819 [2024-07-16 00:40:56.419841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.819 [2024-07-16 00:40:56.429065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.819 [2024-07-16 00:40:56.429771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.819 [2024-07-16 00:40:56.429808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.819 [2024-07-16 00:40:56.429818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.819 [2024-07-16 00:40:56.430057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.819 [2024-07-16 00:40:56.430291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.819 [2024-07-16 00:40:56.430300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.819 [2024-07-16 00:40:56.430308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.819 [2024-07-16 00:40:56.433872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.819 [2024-07-16 00:40:56.442885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.819 [2024-07-16 00:40:56.443446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.819 [2024-07-16 00:40:56.443482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:42.819 [2024-07-16 00:40:56.443494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:42.819 [2024-07-16 00:40:56.443735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:42.819 [2024-07-16 00:40:56.443958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.819 [2024-07-16 00:40:56.443967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.819 [2024-07-16 00:40:56.443974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.447546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.081 [2024-07-16 00:40:56.456782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.457494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.457531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.457542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.457786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.458009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.458018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.081 [2024-07-16 00:40:56.458025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.461600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.081 [2024-07-16 00:40:56.470635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.471261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.471282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.471290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.471512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.471732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.471740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.081 [2024-07-16 00:40:56.471747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.475310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.081 [2024-07-16 00:40:56.484551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.485148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.485185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.485196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.485448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.485672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.485681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.081 [2024-07-16 00:40:56.485688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.489259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.081 [2024-07-16 00:40:56.498485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.499086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.499105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.499113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.499338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.499559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.499567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.081 [2024-07-16 00:40:56.499580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.503129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.081 [2024-07-16 00:40:56.512342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.513018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.513055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.513065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.513312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.513537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.513546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.081 [2024-07-16 00:40:56.513553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.081 [2024-07-16 00:40:56.517116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.081 [2024-07-16 00:40:56.526143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.081 [2024-07-16 00:40:56.526731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.081 [2024-07-16 00:40:56.526750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.081 [2024-07-16 00:40:56.526757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.081 [2024-07-16 00:40:56.526977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.081 [2024-07-16 00:40:56.527196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.081 [2024-07-16 00:40:56.527204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.527211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.530773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.540012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.540655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.540692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.540703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.540942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.541165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.541174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.541181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.544749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.082 [2024-07-16 00:40:56.553984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.554627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.554668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.554678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.554917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.555140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.555149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.555156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.558727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.567967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.568557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.568576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.568584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.568804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.569023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.569030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.569037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.572600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.082 [2024-07-16 00:40:56.581822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.582461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.582498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.582508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.582747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.582971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.582979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.582986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.586553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.595794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.596444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.596481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.596491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.596730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.596958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.596966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.596974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.600543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.082 [2024-07-16 00:40:56.609772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.610480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.610517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.610528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.610767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.610990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.610999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.611006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.614574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.623584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.624215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.624237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.624245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.624466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.624685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.624692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.624700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.628252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.082 [2024-07-16 00:40:56.637469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.638163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.638200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.638211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.638460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.638684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.638692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.638700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.642267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.651289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.651873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.651891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.651899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.652119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.652347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.652355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.652362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.655920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.082 [2024-07-16 00:40:56.665148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.665845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.665882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.665893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.666132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.666365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.666375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.082 [2024-07-16 00:40:56.666382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.082 [2024-07-16 00:40:56.669939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.082 [2024-07-16 00:40:56.678953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.082 [2024-07-16 00:40:56.679622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.082 [2024-07-16 00:40:56.679659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.082 [2024-07-16 00:40:56.679669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.082 [2024-07-16 00:40:56.679908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.082 [2024-07-16 00:40:56.680131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.082 [2024-07-16 00:40:56.680140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.083 [2024-07-16 00:40:56.680148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.083 [2024-07-16 00:40:56.683713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-16 00:40:56.692952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.083 [2024-07-16 00:40:56.693542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-16 00:40:56.693560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-16 00:40:56.693572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.083 [2024-07-16 00:40:56.693792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.083 [2024-07-16 00:40:56.694012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.083 [2024-07-16 00:40:56.694019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.083 [2024-07-16 00:40:56.694026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.083 [2024-07-16 00:40:56.697588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-16 00:40:56.706809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.083 [2024-07-16 00:40:56.707471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-16 00:40:56.707508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-16 00:40:56.707518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.083 [2024-07-16 00:40:56.707758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.083 [2024-07-16 00:40:56.707981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.083 [2024-07-16 00:40:56.707990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.083 [2024-07-16 00:40:56.707997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.344 [2024-07-16 00:40:56.711565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.344 [2024-07-16 00:40:56.720794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.344 [2024-07-16 00:40:56.721357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.721394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.721405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.721644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.721868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.721876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.721883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.725449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.734669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.735394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.735431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.735442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.735681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.735904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.735917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.735925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.739492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.345 [2024-07-16 00:40:56.748511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.749236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.749273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.749284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.749523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.749746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.749754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.749762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.753324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.762336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.763006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.763042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.763052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.763302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.763526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.763534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.763541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.767098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.345 [2024-07-16 00:40:56.776329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.777045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.777083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.777093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.777342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.777566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.777575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.777582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.781140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.790158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.790882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.790919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.790929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.791168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.791402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.791411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.791418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.794975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.345 [2024-07-16 00:40:56.803988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.804624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.804661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.804671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.804910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.805134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.805143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.805150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.808720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.817954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.818634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.818671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.818681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.818921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.819144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.819152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.819159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.822723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.345 [2024-07-16 00:40:56.831944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.832542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.832579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.832589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.832833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.833056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.833065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.833072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.836642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.845868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.846549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.846586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.846596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.846836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.847059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.847067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.847075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.850640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.345 [2024-07-16 00:40:56.859855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.345 [2024-07-16 00:40:56.860441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.345 [2024-07-16 00:40:56.860460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.345 [2024-07-16 00:40:56.860468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.345 [2024-07-16 00:40:56.860688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.345 [2024-07-16 00:40:56.860907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.345 [2024-07-16 00:40:56.860915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.345 [2024-07-16 00:40:56.860922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.345 [2024-07-16 00:40:56.864480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.345 [2024-07-16 00:40:56.873697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.874263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.874278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.874286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.874505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.874724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.874731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.874743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.878294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.346 [2024-07-16 00:40:56.887515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.888191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.888228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.888248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.888488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.888711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.888720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.888727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.892285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.346 [2024-07-16 00:40:56.901508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.902228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.902272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.902282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.902521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.902745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.902754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.902761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.906323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.346 [2024-07-16 00:40:56.915337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.916052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.916088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.916099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.916347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.916571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.916580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.916587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.920145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.346 [2024-07-16 00:40:56.929153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.929871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.929915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.929927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.930167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.930400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.930409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.930417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.933974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.346 [2024-07-16 00:40:56.942990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.943648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.943685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.943696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.943935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.944158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.944167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.944174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.947735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.346 [2024-07-16 00:40:56.956954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.957631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.957667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.957678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.957917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.958140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.958148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.958156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.346 [2024-07-16 00:40:56.961723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.346 [2024-07-16 00:40:56.970949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.346 [2024-07-16 00:40:56.971546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.346 [2024-07-16 00:40:56.971584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.346 [2024-07-16 00:40:56.971594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.346 [2024-07-16 00:40:56.971834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.346 [2024-07-16 00:40:56.972061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.346 [2024-07-16 00:40:56.972070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.346 [2024-07-16 00:40:56.972077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:56.975652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.608 [2024-07-16 00:40:56.984898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:56.985585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:56.985622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:56.985632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:56.985872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:56.986095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:56.986104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:56.986111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:56.989686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.608 [2024-07-16 00:40:56.998699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:56.999277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:56.999296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:56.999303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:56.999524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:56.999744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:56.999751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:56.999758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.003313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.608 [2024-07-16 00:40:57.012539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.013197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.013241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.013254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.013496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.013719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:57.013728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:57.013735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.017302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.608 [2024-07-16 00:40:57.026541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.027248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.027285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.027297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.027538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.027761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:57.027771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:57.027778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.031341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.608 [2024-07-16 00:40:57.040360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.040987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.041005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.041012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.041238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.041459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:57.041468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:57.041475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.045029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.608 [2024-07-16 00:40:57.054255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.054960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.054998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.055010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.055262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.055487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:57.055496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:57.055504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.059067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.608 [2024-07-16 00:40:57.068104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.068794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.068831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.068846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.069086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.069318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.608 [2024-07-16 00:40:57.069327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.608 [2024-07-16 00:40:57.069335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.608 [2024-07-16 00:40:57.072907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.608 [2024-07-16 00:40:57.081958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.608 [2024-07-16 00:40:57.082650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.608 [2024-07-16 00:40:57.082688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.608 [2024-07-16 00:40:57.082698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.608 [2024-07-16 00:40:57.082937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.608 [2024-07-16 00:40:57.083161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.083169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.083176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.086752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.095792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.096524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.096562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.096572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.096811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.097034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.097043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.097050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.100625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.609 [2024-07-16 00:40:57.109794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.110380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.110418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.110430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.110673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.110896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.110910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.110918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.114558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.123791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.124511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.124549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.124559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.124798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.125022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.125030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.125038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.128608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.609 [2024-07-16 00:40:57.137822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.138541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.138579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.138590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.138829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.139052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.139061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.139068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.142636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.151660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.152395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.152432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.152442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.152682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.152905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.152914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.152921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.156490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.609 [2024-07-16 00:40:57.165518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.166247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.166284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.166296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.166537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.166760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.166769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.166777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.170338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.179355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.180070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.180107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.180117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.180364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.180588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.180596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.180604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.184164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.609 [2024-07-16 00:40:57.193197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.193871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.193908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.193919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.194158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.194391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.194400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.194408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.197967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.207194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.207911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.207948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.207959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.208202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.208436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.208445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.208452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.212011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.609 [2024-07-16 00:40:57.221033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.221725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.221763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.221773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.222012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.609 [2024-07-16 00:40:57.222244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.609 [2024-07-16 00:40:57.222253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.609 [2024-07-16 00:40:57.222261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.609 [2024-07-16 00:40:57.225818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.609 [2024-07-16 00:40:57.235040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.609 [2024-07-16 00:40:57.235668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.609 [2024-07-16 00:40:57.235687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.609 [2024-07-16 00:40:57.235695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.609 [2024-07-16 00:40:57.235915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.610 [2024-07-16 00:40:57.236134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.610 [2024-07-16 00:40:57.236142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.610 [2024-07-16 00:40:57.236148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.239706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.871 [2024-07-16 00:40:57.248926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.249505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.871 [2024-07-16 00:40:57.249521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.871 [2024-07-16 00:40:57.249529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.871 [2024-07-16 00:40:57.249749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.871 [2024-07-16 00:40:57.249968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.871 [2024-07-16 00:40:57.249975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.871 [2024-07-16 00:40:57.249988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.253545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.871 [2024-07-16 00:40:57.262757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.263450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.871 [2024-07-16 00:40:57.263487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.871 [2024-07-16 00:40:57.263497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.871 [2024-07-16 00:40:57.263737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.871 [2024-07-16 00:40:57.263960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.871 [2024-07-16 00:40:57.263969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.871 [2024-07-16 00:40:57.263976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.267548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.871 [2024-07-16 00:40:57.276570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.277196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.871 [2024-07-16 00:40:57.277214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.871 [2024-07-16 00:40:57.277222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.871 [2024-07-16 00:40:57.277446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.871 [2024-07-16 00:40:57.277666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.871 [2024-07-16 00:40:57.277674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.871 [2024-07-16 00:40:57.277681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.281233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.871 [2024-07-16 00:40:57.290464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.291079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.871 [2024-07-16 00:40:57.291094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.871 [2024-07-16 00:40:57.291101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.871 [2024-07-16 00:40:57.291324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.871 [2024-07-16 00:40:57.291544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.871 [2024-07-16 00:40:57.291553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.871 [2024-07-16 00:40:57.291560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.295112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.871 [2024-07-16 00:40:57.304341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.304954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.871 [2024-07-16 00:40:57.304973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.871 [2024-07-16 00:40:57.304981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.871 [2024-07-16 00:40:57.305200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.871 [2024-07-16 00:40:57.305425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.871 [2024-07-16 00:40:57.305435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.871 [2024-07-16 00:40:57.305441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.871 [2024-07-16 00:40:57.308988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.871 [2024-07-16 00:40:57.318212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.871 [2024-07-16 00:40:57.318874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.318912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.318923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.319162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.319394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.319404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.319411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.322971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.872 [2024-07-16 00:40:57.332207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.332935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.332973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.332984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.333223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.333455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.333464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.333472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.337030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.872 [2024-07-16 00:40:57.346049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.346524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.346542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.346550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.346769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.346993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.347001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.347008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.350562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.872 [2024-07-16 00:40:57.359990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.360566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.360582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.360590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.360808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.361027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.361035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.361042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.364591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.872 [2024-07-16 00:40:57.373807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.374497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.374534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.374545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.374784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.375008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.375017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.375024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.378599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.872 [2024-07-16 00:40:57.387639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.388422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.388460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.388471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.388711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.388934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.388943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.388950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.392526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.872 [2024-07-16 00:40:57.401554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.402165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.402184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.402191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.402418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.402639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.402646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.402653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.406207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.872 [2024-07-16 00:40:57.415435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.416017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.416032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.416040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.416264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.416484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.416492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.416498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.420052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.872 [2024-07-16 00:40:57.429271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.429817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.429831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.429838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.430057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.430282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.430291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.430298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.433855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.872 [2024-07-16 00:40:57.443090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.443642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.443657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.443669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.443887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.444107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.444114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.444121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.447682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.872 [2024-07-16 00:40:57.456913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.872 [2024-07-16 00:40:57.457614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.872 [2024-07-16 00:40:57.457651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.872 [2024-07-16 00:40:57.457661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.872 [2024-07-16 00:40:57.457901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.872 [2024-07-16 00:40:57.458125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.872 [2024-07-16 00:40:57.458134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.872 [2024-07-16 00:40:57.458141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.872 [2024-07-16 00:40:57.461708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.873 [2024-07-16 00:40:57.470719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.873 [2024-07-16 00:40:57.471316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.873 [2024-07-16 00:40:57.471335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.873 [2024-07-16 00:40:57.471343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.873 [2024-07-16 00:40:57.471564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.873 [2024-07-16 00:40:57.471783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.873 [2024-07-16 00:40:57.471791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.873 [2024-07-16 00:40:57.471798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.873 [2024-07-16 00:40:57.475353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.873 [2024-07-16 00:40:57.484561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.873 [2024-07-16 00:40:57.485137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.873 [2024-07-16 00:40:57.485152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.873 [2024-07-16 00:40:57.485159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.873 [2024-07-16 00:40:57.485383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.873 [2024-07-16 00:40:57.485603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.873 [2024-07-16 00:40:57.485615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.873 [2024-07-16 00:40:57.485623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.873 [2024-07-16 00:40:57.489188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.873 [2024-07-16 00:40:57.498410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.873 [2024-07-16 00:40:57.498991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.873 [2024-07-16 00:40:57.499028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:43.873 [2024-07-16 00:40:57.499040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:43.873 [2024-07-16 00:40:57.499289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:43.873 [2024-07-16 00:40:57.499513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.873 [2024-07-16 00:40:57.499522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.873 [2024-07-16 00:40:57.499529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.134 [2024-07-16 00:40:57.503088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.134 [2024-07-16 00:40:57.512313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.134 [2024-07-16 00:40:57.512990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.134 [2024-07-16 00:40:57.513027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.134 [2024-07-16 00:40:57.513039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.134 [2024-07-16 00:40:57.513286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.134 [2024-07-16 00:40:57.513510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.134 [2024-07-16 00:40:57.513519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.134 [2024-07-16 00:40:57.513526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.134 [2024-07-16 00:40:57.517087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.134 [2024-07-16 00:40:57.526314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.134 [2024-07-16 00:40:57.527038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.134 [2024-07-16 00:40:57.527075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.527087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.527333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.527558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.527566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.527573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.531133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.540159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.540685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.540704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.540712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.540932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.541151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.541160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.541167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.544726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.135 [2024-07-16 00:40:57.554161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.554657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.554673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.554680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.554899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.555119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.555126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.555133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.558691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.568121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.568751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.568788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.568799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.569038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.569269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.569278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.569286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.572847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.135 [2024-07-16 00:40:57.582083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.582771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.582808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.582818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.583062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.583293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.583302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.583310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.586870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.595900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.596501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.596538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.596548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.596788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.597011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.597020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.597028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.600593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.135 [2024-07-16 00:40:57.609822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.610546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.610583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.610593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.610832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.611055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.611064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.611071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.614639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.623658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.624128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.624146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.624153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.624380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.624599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.624607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.624621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.628172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.135 [2024-07-16 00:40:57.637597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.638286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.638324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.638336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.638577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.638800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.638809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.638816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.642383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.651403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.651914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.651932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.651939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.652159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.652387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.652395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.652403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.655955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.135 [2024-07-16 00:40:57.665391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.135 [2024-07-16 00:40:57.666072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.135 [2024-07-16 00:40:57.666109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.135 [2024-07-16 00:40:57.666119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.135 [2024-07-16 00:40:57.666365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.135 [2024-07-16 00:40:57.666590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.135 [2024-07-16 00:40:57.666598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.135 [2024-07-16 00:40:57.666606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.135 [2024-07-16 00:40:57.670166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.135 [2024-07-16 00:40:57.679187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.679911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.679952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.679963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.680202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.680440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.680449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.680457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.684015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.136 [2024-07-16 00:40:57.693038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.693602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.693640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.693652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.693892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.694116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.694124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.694131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.697695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.136 [2024-07-16 00:40:57.706921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.707618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.707656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.707666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.707905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.708128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.708137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.708144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.711710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.136 [2024-07-16 00:40:57.720729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.721473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.721510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.721520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.721759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.721987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.721996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.722003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.725571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.136 [2024-07-16 00:40:57.734580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.735253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.735289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.735301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.735541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.735764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.735773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.735780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.739341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.136 [2024-07-16 00:40:57.748399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.749070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.749107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.749117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.749366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.749591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.749599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.749606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.136 [2024-07-16 00:40:57.753163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.136 [2024-07-16 00:40:57.762387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.136 [2024-07-16 00:40:57.763065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.136 [2024-07-16 00:40:57.763103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.136 [2024-07-16 00:40:57.763113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.136 [2024-07-16 00:40:57.763362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.136 [2024-07-16 00:40:57.763587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.136 [2024-07-16 00:40:57.763595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.136 [2024-07-16 00:40:57.763602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.767169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.776188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.776909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.776947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.776957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.777196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.777429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.777439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.777446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.781002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.398 [2024-07-16 00:40:57.790027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.790719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.790756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.790766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.791005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.791228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.791247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.791255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.794817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.803834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.804557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.804595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.804605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.804844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.805067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.805076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.805083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.808653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.398 [2024-07-16 00:40:57.817667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.818334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.818371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.818386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.818625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.818849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.818857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.818865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.822431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.831653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.832330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.832367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.832379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.832621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.832844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.832853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.832860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.836432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.398 [2024-07-16 00:40:57.845660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.846309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.846346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.846357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.846596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.846819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.846827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.846835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.850405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.859631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.860333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.860370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.860382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.860623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.860846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.860859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.860866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.864437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.398 [2024-07-16 00:40:57.873457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.874129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.874166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.874176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.874425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.874649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.874657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.874665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.878221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.887440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.888157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.888194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.888206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.888467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.888691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.888699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.888707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.892268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.398 [2024-07-16 00:40:57.901288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.901997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.902034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.398 [2024-07-16 00:40:57.902044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.398 [2024-07-16 00:40:57.902291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.398 [2024-07-16 00:40:57.902516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.398 [2024-07-16 00:40:57.902524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.398 [2024-07-16 00:40:57.902532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.398 [2024-07-16 00:40:57.906090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.398 [2024-07-16 00:40:57.915110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.398 [2024-07-16 00:40:57.915826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.398 [2024-07-16 00:40:57.915863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.915874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.916113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.916345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.916354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.916362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.919917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.399 [2024-07-16 00:40:57.928931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.929514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.929533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.929541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.929761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.929981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.929989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.929995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.933554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.399 [2024-07-16 00:40:57.942765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.943366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.943382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.943389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.943609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.943828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.943835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.943842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.947398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.399 [2024-07-16 00:40:57.956619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.957319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.957357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.957367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.957617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.957840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.957848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.957856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.961423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.399 [2024-07-16 00:40:57.970436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.971152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.971189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.971201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.971454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.971678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.971686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.971694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.975251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.399 [2024-07-16 00:40:57.984268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.984865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.984902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.984913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.985151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.985384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.985393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.985400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:57.988964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.399 [2024-07-16 00:40:57.998197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:57.998812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:57.998849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:57.998860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:57.999099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:57.999332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:57.999341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:57.999353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:58.002911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.399 [2024-07-16 00:40:58.012139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:58.012855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:58.012893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:58.012903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:58.013142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:58.013375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:58.013385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:58.013392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.399 [2024-07-16 00:40:58.016949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.399 [2024-07-16 00:40:58.025976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.399 [2024-07-16 00:40:58.026653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.399 [2024-07-16 00:40:58.026690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.399 [2024-07-16 00:40:58.026700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.399 [2024-07-16 00:40:58.026940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.399 [2024-07-16 00:40:58.027163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.399 [2024-07-16 00:40:58.027172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.399 [2024-07-16 00:40:58.027180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.030751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.661 [2024-07-16 00:40:58.039975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.040661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.040698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.040708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.040948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.041171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.041179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.041187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1284164 Killed "${NVMF_APP[@]}" "$@" 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.661 [2024-07-16 00:40:58.044760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
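The shell lines interleaved above mark the turning point: bdevperf.sh (line 35) has killed the running nvmf_tgt ("${NVMF_APP[@]}"), and tgt_init/nvmfappstart is bringing up a replacement with core mask 0xE. Every reconnect attempt logged around this window is refused because no target is listening yet. Below is a simplified sketch of that kill-and-restart pattern, not the actual bdevperf.sh or the nvmf common helpers; TGT_BIN and RPC_SOCK are assumptions for the sketch.

#!/usr/bin/env bash
# Sketch only: exercise the host-side reconnect path by taking the target
# down mid-I/O and bringing a new instance up.
TGT_BIN=${TGT_BIN:-./build/bin/nvmf_tgt}
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

# 1) Kill the running target; until step 3 finishes, every host reconnect
#    gets ECONNREFUSED and bdev_nvme logs "Resetting controller failed."
pkill -9 -f "$TGT_BIN" || true

# 2) Start a fresh target (the log shows -i 0 -e 0xFFFF -m 0xE, run inside
#    the cvl_0_0_ns_spdk network namespace).
"$TGT_BIN" -i 0 -e 0xFFFF -m 0xE &

# 3) Wait for the new instance's RPC UNIX socket before reconfiguring it.
until [ -S "$RPC_SOCK" ]; do sleep 0.1; done
echo "nvmf_tgt restarted; host reconnects can succeed again"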
00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1285868 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1285868 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1285868 ']' 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.661 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.661 [2024-07-16 00:40:58.053784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.054517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.054554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.054565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.054804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.055027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.055035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.055043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.058610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.661 [2024-07-16 00:40:58.067670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.068333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.068371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.068383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.068623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.068847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.068855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.068862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.072433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.661 [2024-07-16 00:40:58.081659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.082125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.082146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.082159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.082386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.082607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.082615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.082622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.086176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.661 [2024-07-16 00:40:58.095611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.096296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.096333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.096345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.096586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.096809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.096819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.096827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.100400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.661 [2024-07-16 00:40:58.100533] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:44.661 [2024-07-16 00:40:58.100577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.661 [2024-07-16 00:40:58.109423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.110106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.110144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.110155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.110402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.110626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.110635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.110642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.114200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.661 [2024-07-16 00:40:58.123431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.124130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.124167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.124182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.124430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.124654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.124663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.124670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.128227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.661 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.661 [2024-07-16 00:40:58.137468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.138176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.138214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.138224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.138470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.138700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.138709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.661 [2024-07-16 00:40:58.138716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.661 [2024-07-16 00:40:58.142274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
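The "EAL: No free 2048 kB hugepages reported on node 1" notice above is DPDK initialization reporting that NUMA node 1 has no free 2 MB hugepages at this moment; it is informational here, since the application goes on to start its reactors below. If that needs checking on the test box, the per-node counters are standard Linux sysfs paths, nothing SPDK-specific:

# Per-NUMA-node 2 MB hugepage pools (configured / free):
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
# System-wide summary:
grep -i huge /proc/meminfo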
00:29:44.661 [2024-07-16 00:40:58.151385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.661 [2024-07-16 00:40:58.152024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.661 [2024-07-16 00:40:58.152043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.661 [2024-07-16 00:40:58.152050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.661 [2024-07-16 00:40:58.152277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.661 [2024-07-16 00:40:58.152498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.661 [2024-07-16 00:40:58.152506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.152513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.156066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.165295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.166003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.166040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.166050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.166297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.166521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.166533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.166541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.170098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.662 [2024-07-16 00:40:58.179120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.179714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.179733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.179741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.179962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.180181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.180189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.180196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.183747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.189834] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:44.662 [2024-07-16 00:40:58.193038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.193630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.193647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.193654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.193874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.194094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.194101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.194108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.197669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.662 [2024-07-16 00:40:58.206891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.207521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.207537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.207545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.207765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.207985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.207993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.208000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.211559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.220788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.221515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.221555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.221566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.221810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.222033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.222042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.222050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.225616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.662 [2024-07-16 00:40:58.234650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.235080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.235101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.235109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.235337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.235557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.235565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.235572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.239129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.243157] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.662 [2024-07-16 00:40:58.243182] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.662 [2024-07-16 00:40:58.243188] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.662 [2024-07-16 00:40:58.243193] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.662 [2024-07-16 00:40:58.243198] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.662 [2024-07-16 00:40:58.243244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.662 [2024-07-16 00:40:58.243376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.662 [2024-07-16 00:40:58.243509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.662 [2024-07-16 00:40:58.248561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.249249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.249288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.249301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.249544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.249772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.249781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.249789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
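The app_setup_trace and reactor notices above confirm what was requested on the nvmf_tgt command line: -e 0xFFFF sets the tracepoint group mask reported in the first notice (hence the spdk_trace hint), and -m 0xE is a hex core mask whose set bits are cores 1, 2 and 3 -- exactly the three "Reactor started on core N" lines. A one-liner to decode such a mask (plain bash, nothing SPDK-specific):

mask=0xE                                  # from 'nvmf_tgt ... -m 0xE'
for bit in 0 1 2 3 4 5 6 7; do
    (( (mask >> bit) & 1 )) && echo "core $bit selected"
done
# prints: core 1 selected / core 2 selected / core 3 selected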
00:29:44.662 [2024-07-16 00:40:58.253353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.262368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.263065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.263104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.263115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.263364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.263588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.263597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.263605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.267161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.662 [2024-07-16 00:40:58.276181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.276796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.276815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.276823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.277045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.662 [2024-07-16 00:40:58.277271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.662 [2024-07-16 00:40:58.277281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.662 [2024-07-16 00:40:58.277289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.662 [2024-07-16 00:40:58.280839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.662 [2024-07-16 00:40:58.290077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.662 [2024-07-16 00:40:58.290570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.662 [2024-07-16 00:40:58.290609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.662 [2024-07-16 00:40:58.290620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.662 [2024-07-16 00:40:58.290861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.291084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.291094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.291103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.294671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.924 [2024-07-16 00:40:58.303909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.304496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.304515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.304523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.304744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.304963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.304971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.304978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.308525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.924 [2024-07-16 00:40:58.317749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.318244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.318260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.318267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.318486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.318706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.318714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.318720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.322275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.924 [2024-07-16 00:40:58.331705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.332347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.332384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.332396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.332639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.332863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.332872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.332879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.336445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.924 [2024-07-16 00:40:58.345673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.346329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.346366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.346385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.346624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.346848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.346857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.346864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.350431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.924 [2024-07-16 00:40:58.359659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.360315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.360353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.360364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.360603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.360826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.360835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.360842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.364409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.924 [2024-07-16 00:40:58.373636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.374324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.374361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.374373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.374616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.374840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.374848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.374856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.378422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.924 [2024-07-16 00:40:58.387442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.388173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.388210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.388221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.388477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.388702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.388715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.388722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.392286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.924 [2024-07-16 00:40:58.401298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.402027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.402064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.402075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.402322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.924 [2024-07-16 00:40:58.402546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.924 [2024-07-16 00:40:58.402555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.924 [2024-07-16 00:40:58.402562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.924 [2024-07-16 00:40:58.406122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.924 [2024-07-16 00:40:58.415141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.924 [2024-07-16 00:40:58.415824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.924 [2024-07-16 00:40:58.415862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.924 [2024-07-16 00:40:58.415873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.924 [2024-07-16 00:40:58.416112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.416343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.416353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.416360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.419918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.925 [2024-07-16 00:40:58.429146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.429838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.429876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.429886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.430126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.430356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.430366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.430373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.433929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.925 [2024-07-16 00:40:58.443152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.443645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.443663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.443671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.443891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.444110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.444118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.444125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.447684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.925 [2024-07-16 00:40:58.457107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.457752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.457767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.457775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.457994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.458213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.458220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.458227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.461785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.925 [2024-07-16 00:40:58.470998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.471429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.471444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.471451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.471670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.471890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.471897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.471904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.475452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.925 [2024-07-16 00:40:58.484880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.485566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.485604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.485614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.485858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.486082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.486090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.486098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.489676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.925 [2024-07-16 00:40:58.498697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.499308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.499346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.499356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.499595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.499819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.499828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.499835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.503400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.925 [2024-07-16 00:40:58.512632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.513227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.513252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.513260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.513479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.513698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.513706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.513713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.517270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.925 [2024-07-16 00:40:58.526490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.527025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.527063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.527074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.527320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.527545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.527553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.527565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.531126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.925 [2024-07-16 00:40:58.540352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.925 [2024-07-16 00:40:58.540943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.925 [2024-07-16 00:40:58.540980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:44.925 [2024-07-16 00:40:58.540991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:44.925 [2024-07-16 00:40:58.541238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:44.925 [2024-07-16 00:40:58.541463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.925 [2024-07-16 00:40:58.541471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.925 [2024-07-16 00:40:58.541479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.925 [2024-07-16 00:40:58.545035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.187 [2024-07-16 00:40:58.554269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.187 [2024-07-16 00:40:58.554965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.187 [2024-07-16 00:40:58.555003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.187 [2024-07-16 00:40:58.555013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.187 [2024-07-16 00:40:58.555261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.187 [2024-07-16 00:40:58.555485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.187 [2024-07-16 00:40:58.555494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.187 [2024-07-16 00:40:58.555502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.187 [2024-07-16 00:40:58.559061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.187 [2024-07-16 00:40:58.568086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.187 [2024-07-16 00:40:58.568735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.187 [2024-07-16 00:40:58.568774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.187 [2024-07-16 00:40:58.568785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.187 [2024-07-16 00:40:58.569024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.187 [2024-07-16 00:40:58.569255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.187 [2024-07-16 00:40:58.569265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.187 [2024-07-16 00:40:58.569273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.187 [2024-07-16 00:40:58.572827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.187 [2024-07-16 00:40:58.582054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.187 [2024-07-16 00:40:58.582730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.187 [2024-07-16 00:40:58.582771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.187 [2024-07-16 00:40:58.582782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.187 [2024-07-16 00:40:58.583021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.187 [2024-07-16 00:40:58.583252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.187 [2024-07-16 00:40:58.583261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.187 [2024-07-16 00:40:58.583269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.187 [2024-07-16 00:40:58.586828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.596061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.596754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.596791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.596802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.597041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.597271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.597280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.597288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.600840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.188 [2024-07-16 00:40:58.610062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.610798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.610836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.610846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.611086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.611317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.611327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.611335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.614891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.623914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.624614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.624651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.624661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.624901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.625130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.625139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.625146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.628710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.188 [2024-07-16 00:40:58.637728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.638205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.638222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.638235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.638456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.638675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.638684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.638691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.642247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.651676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.652117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.652132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.652139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.652364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.652584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.652592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.652599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.656155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.188 [2024-07-16 00:40:58.665599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.666223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.666243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.666251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.666471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.666690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.666697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.666704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.670267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.679490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.680105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.680120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.680127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.680350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.680570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.680578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.680585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.684133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.188 [2024-07-16 00:40:58.693358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.693786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.693801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.693808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.694028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.694252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.694260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.694267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.697819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.707249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.707826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.707841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.707849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.708067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.708296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.708306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.708313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.711864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.188 [2024-07-16 00:40:58.721079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.721674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.721690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.721701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.721919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.722139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.722146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.188 [2024-07-16 00:40:58.722154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.188 [2024-07-16 00:40:58.725711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.188 [2024-07-16 00:40:58.734930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.188 [2024-07-16 00:40:58.735636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.188 [2024-07-16 00:40:58.735675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.188 [2024-07-16 00:40:58.735686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.188 [2024-07-16 00:40:58.735927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.188 [2024-07-16 00:40:58.736151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.188 [2024-07-16 00:40:58.736161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.736169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.739739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.189 [2024-07-16 00:40:58.748765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.189 [2024-07-16 00:40:58.749542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.189 [2024-07-16 00:40:58.749580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.189 [2024-07-16 00:40:58.749592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.189 [2024-07-16 00:40:58.749831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.189 [2024-07-16 00:40:58.750055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.189 [2024-07-16 00:40:58.750064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.750071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.753640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.189 [2024-07-16 00:40:58.762688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.189 [2024-07-16 00:40:58.763226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.189 [2024-07-16 00:40:58.763270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.189 [2024-07-16 00:40:58.763280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.189 [2024-07-16 00:40:58.763519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.189 [2024-07-16 00:40:58.763743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.189 [2024-07-16 00:40:58.763756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.763764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.767327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.189 [2024-07-16 00:40:58.776555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.189 [2024-07-16 00:40:58.777138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.189 [2024-07-16 00:40:58.777156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.189 [2024-07-16 00:40:58.777164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.189 [2024-07-16 00:40:58.777391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.189 [2024-07-16 00:40:58.777611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.189 [2024-07-16 00:40:58.777619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.777625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.781177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.189 [2024-07-16 00:40:58.790418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.189 [2024-07-16 00:40:58.791103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.189 [2024-07-16 00:40:58.791140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.189 [2024-07-16 00:40:58.791151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.189 [2024-07-16 00:40:58.791399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.189 [2024-07-16 00:40:58.791623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.189 [2024-07-16 00:40:58.791631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.791639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.795196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.189 [2024-07-16 00:40:58.804216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.189 [2024-07-16 00:40:58.804906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.189 [2024-07-16 00:40:58.804943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.189 [2024-07-16 00:40:58.804954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.189 [2024-07-16 00:40:58.805194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.189 [2024-07-16 00:40:58.805427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.189 [2024-07-16 00:40:58.805438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.189 [2024-07-16 00:40:58.805445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.189 [2024-07-16 00:40:58.809007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.451 [2024-07-16 00:40:58.818036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.818731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.818769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.818780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.819019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.819251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.819262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.819269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.822829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.451 [2024-07-16 00:40:58.831858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.832559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.832597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.832608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.832848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.833072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.833081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.833088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.836655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.451 [2024-07-16 00:40:58.845675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.846100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.846118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.846125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.846351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.846571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.846580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.846588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.850137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.451 [2024-07-16 00:40:58.859578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.860014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.860029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.860036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.860264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.860485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.860492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.860499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.864051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:45.451 [2024-07-16 00:40:58.873492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.451 [2024-07-16 00:40:58.874214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.874259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.874271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.874514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.874737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.874746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.874753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.878319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.451 [2024-07-16 00:40:58.887340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.887916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.887954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.887965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.888204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.888435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.888445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.888453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.892019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.451 [2024-07-16 00:40:58.901249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.901873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.901891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.901898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.902122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.902350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.902360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.902368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.905923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.451 [2024-07-16 00:40:58.915146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.451 [2024-07-16 00:40:58.915773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.451 [2024-07-16 00:40:58.915789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.451 [2024-07-16 00:40:58.915797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.451 [2024-07-16 00:40:58.916016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.451 [2024-07-16 00:40:58.916241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.451 [2024-07-16 00:40:58.916249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.451 [2024-07-16 00:40:58.916256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.451 [2024-07-16 00:40:58.918228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.451 [2024-07-16 00:40:58.919810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.451 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.452 [2024-07-16 00:40:58.929031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.452 [2024-07-16 00:40:58.929569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.452 [2024-07-16 00:40:58.929606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.452 [2024-07-16 00:40:58.929617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.452 [2024-07-16 00:40:58.929856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.452 [2024-07-16 00:40:58.930080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.452 [2024-07-16 00:40:58.930089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.452 [2024-07-16 00:40:58.930096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.452 [2024-07-16 00:40:58.933665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.452 [2024-07-16 00:40:58.942901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.452 [2024-07-16 00:40:58.943615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.452 [2024-07-16 00:40:58.943653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.452 [2024-07-16 00:40:58.943663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.452 [2024-07-16 00:40:58.943902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.452 [2024-07-16 00:40:58.944125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.452 [2024-07-16 00:40:58.944134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.452 [2024-07-16 00:40:58.944142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.452 [2024-07-16 00:40:58.947712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.452 Malloc0 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.452 [2024-07-16 00:40:58.956731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.452 [2024-07-16 00:40:58.957541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.452 [2024-07-16 00:40:58.957578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.452 [2024-07-16 00:40:58.957590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.452 [2024-07-16 00:40:58.957831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.452 [2024-07-16 00:40:58.958054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.452 [2024-07-16 00:40:58.958063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.452 [2024-07-16 00:40:58.958071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.452 [2024-07-16 00:40:58.961636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.452 [2024-07-16 00:40:58.970657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.452 [2024-07-16 00:40:58.971112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.452 [2024-07-16 00:40:58.971130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f6e0 with addr=10.0.0.2, port=4420 00:29:45.452 [2024-07-16 00:40:58.971138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f6e0 is same with the state(5) to be set 00:29:45.452 [2024-07-16 00:40:58.971364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f6e0 (9): Bad file descriptor 00:29:45.452 [2024-07-16 00:40:58.971584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.452 [2024-07-16 00:40:58.971592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.452 [2024-07-16 00:40:58.971599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.452 [2024-07-16 00:40:58.975158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
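The rpc_cmd lines interleaved with the reconnect errors above are host/bdevperf.sh standing up its NVMe-oF target: a TCP transport, a 64 MB Malloc0 bdev (512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as a namespace, and, just below, a TCP listener on 10.0.0.2:4420. A minimal consolidated sketch of that sequence, written here against SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock rather than the test's rpc_cmd wrapper (the trace itself remains the authoritative record):

    # Same arguments as the rpc_cmd calls in the trace; the rpc.py invocation
    # and default RPC socket are assumptions made for readability only.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is registered in the trace below, the connection-refused reset attempts stop and the controller reset completes successfully before the bdevperf run is summarized.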
00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.452 [2024-07-16 00:40:58.983093] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.452 [2024-07-16 00:40:58.984584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.452 00:40:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1284645 00:29:45.452 [2024-07-16 00:40:59.033869] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:55.447 00:29:55.447 Latency(us) 00:29:55.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:55.447 Verification LBA range: start 0x0 length 0x4000 00:29:55.447 Nvme1n1 : 15.00 8428.29 32.92 9680.31 0.00 7042.70 785.07 14308.69 00:29:55.447 =================================================================================================================== 00:29:55.447 Total : 8428.29 32.92 9680.31 0.00 7042.70 785.07 14308.69 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:55.447 rmmod nvme_tcp 00:29:55.447 rmmod nvme_fabrics 00:29:55.447 rmmod nvme_keyring 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1285868 ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1285868 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1285868 ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1285868 00:29:55.447 00:41:07 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1285868 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1285868' 00:29:55.447 killing process with pid 1285868 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1285868 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1285868 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.447 00:41:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.424 00:41:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.424 00:29:56.424 real 0m28.837s 00:29:56.424 user 1m3.019s 00:29:56.424 sys 0m7.943s 00:29:56.424 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.424 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.424 ************************************ 00:29:56.424 END TEST nvmf_bdevperf 00:29:56.424 ************************************ 00:29:56.685 00:41:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:56.685 00:41:10 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:56.685 00:41:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:56.685 00:41:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.685 00:41:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.685 ************************************ 00:29:56.685 START TEST nvmf_target_disconnect 00:29:56.685 ************************************ 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:56.685 * Looking for test storage... 
00:29:56.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:56.685 00:41:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.686 00:41:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.828 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:04.829 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:04.829 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.829 00:41:17 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:04.829 Found net devices under 0000:31:00.0: cvl_0_0 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:04.829 Found net devices under 0000:31:00.1: cvl_0_1 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.829 00:41:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:04.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:30:04.829 00:30:04.829 --- 10.0.0.2 ping statistics --- 00:30:04.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.829 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:30:04.829 00:30:04.829 --- 10.0.0.1 ping statistics --- 00:30:04.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.829 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:04.829 ************************************ 00:30:04.829 START TEST nvmf_target_disconnect_tc1 00:30:04.829 ************************************ 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:04.829 
00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.829 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.830 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.830 [2024-07-16 00:41:18.406510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.830 [2024-07-16 00:41:18.406586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x818650 with addr=10.0.0.2, port=4420 00:30:04.830 [2024-07-16 00:41:18.406622] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:04.830 [2024-07-16 00:41:18.406633] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:04.830 [2024-07-16 00:41:18.406640] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:04.830 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:04.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:04.830 Initializing NVMe Controllers 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.830 00:30:04.830 real 0m0.121s 00:30:04.830 user 0m0.052s 00:30:04.830 sys 0m0.069s 
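nvmf_target_disconnect_tc1 above is the negative case: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() therefore cannot create the admin qpair, and the test passes because the NOT() wrapper expects a non-zero exit. A minimal sketch of the equivalent standalone check, assuming the same workspace path; the explicit if/exit is illustrative, the script itself routes this through NOT():

    # Run the reconnect example with no target listening and expect it to fail.
    reconnect=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
    if "$reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded with no target listening" >&2
        exit 1
    fi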
00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.830 ************************************ 00:30:04.830 END TEST nvmf_target_disconnect_tc1 00:30:04.830 ************************************ 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:04.830 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:05.091 ************************************ 00:30:05.091 START TEST nvmf_target_disconnect_tc2 00:30:05.091 ************************************ 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1292289 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1292289 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1292289 ']' 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
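For tc2 the trace below starts a real target first: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xF0 (pid 1292289 in this run), Malloc0, cnode1 and the 10.0.0.2:4420 listener are created over RPC, the reconnect example is started as pid 1292614, and the test then removes the target with kill -9. That is what produces the burst of "Read/Write completed with error (sct=0, sc=8)" completions and the "qpair failed and we were unable to recover it" reconnect attempts that follow. A rough outline of that flow, with the sleeps and arguments taken from this run and the workspace paths shortened for readability:

    # Outline only; the real steps are the rpc_cmd calls visible in the trace.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # ... rpc_cmd: nvmf_create_transport -t tcp -o, bdev_malloc_create 64 512 -b Malloc0,
    #     nvmf_create_subsystem/add_ns/add_listener on 10.0.0.2:4420 ...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"    # target disappears mid-I/O; the host side logs qpair failures
    sleep 2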
00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.091 00:41:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.091 [2024-07-16 00:41:18.562980] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:30:05.091 [2024-07-16 00:41:18.563047] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.091 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.091 [2024-07-16 00:41:18.662324] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.353 [2024-07-16 00:41:18.756533] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.353 [2024-07-16 00:41:18.756599] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.353 [2024-07-16 00:41:18.756607] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.353 [2024-07-16 00:41:18.756615] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.353 [2024-07-16 00:41:18.756621] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.353 [2024-07-16 00:41:18.756800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.353 [2024-07-16 00:41:18.756958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.353 [2024-07-16 00:41:18.757124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.353 [2024-07-16 00:41:18.757125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 Malloc0 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:05.926 00:41:19 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 [2024-07-16 00:41:19.434355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 [2024-07-16 00:41:19.474728] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1292614 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:05.926 00:41:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.926 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:08.492 00:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1292289 00:30:08.492 00:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Read completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 Write completed with error (sct=0, sc=8) 00:30:08.492 starting I/O failed 00:30:08.492 [2024-07-16 00:41:21.509050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.492 [2024-07-16 00:41:21.509590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-16 00:41:21.509626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were 
00:30:08.492 [the connect() failed, errno = 111 / sock connection error of tqpair=0x7fa8a4000b90 / "qpair failed and we were unable to recover it" sequence above repeats for every subsequent reconnect attempt while the target remains down (timestamps 00:41:21.509 through 00:41:21.579 in this span); the identical retry entries are condensed here]
00:30:08.497 [2024-07-16 00:41:21.579887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.579915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.580309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.580339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.580708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.580736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.581120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.581149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.581525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.581555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.581879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.581907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.582312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.582342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.582756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.582785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.583181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.583210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.583601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.583631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 
00:30:08.497 [2024-07-16 00:41:21.583914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.583944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.584344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.584374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.584758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.584786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.585181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.585209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.585601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.585630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.586003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.586032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-16 00:41:21.586407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-16 00:41:21.586437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.586798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.586826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.587210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.587258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.587632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.587660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-16 00:41:21.588056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.588084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.588462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.588492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.588880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.588908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.589303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.589333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.589714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.589743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.590140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.590169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.590527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.590558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.590936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.590965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.591358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.591388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.591779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.591807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-16 00:41:21.592187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.592215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.592607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.592636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.593031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.593060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.593441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.593470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.593869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.593897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.594290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.594319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.594705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.594734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.594989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.595019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.595382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.595412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.595788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.595823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-16 00:41:21.596182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.596211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.596596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.596624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.597007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.597035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.597431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.597461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.597859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.597888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.598266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.598296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.598642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.598671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.599058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.599087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.599452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.599483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.599877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.599905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-16 00:41:21.600298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.600327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.600728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.600757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.601152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.601180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.601574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.601603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.601984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.602012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.602387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.602417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.602810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.602838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-16 00:41:21.603219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-16 00:41:21.603267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.603652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.603681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.604041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.604069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-16 00:41:21.604441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.604471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.604860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.604888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.605285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.605315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.605697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.605725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.606119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.606148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.606520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.606549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.606932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.606961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.607354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.607383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.607779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.607807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.608197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.608225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-16 00:41:21.608608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.608637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.608993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.609021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.609393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.609422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.609815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.609844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.610112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.610141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.610422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.610454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.610847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.610876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.611267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.611297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.611679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.611707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.612071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.612106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-16 00:41:21.612504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.612534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.612928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.612956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.613353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.613381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.613778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.613808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.614188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.614217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.614596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.614626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.614989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.615018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.615404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.615435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.615863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.615892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.616290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.616319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-16 00:41:21.616720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.616748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.617014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.617047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.617443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.617472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.617835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.617864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.618145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.618172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.618351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.618383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.618783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.618812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-16 00:41:21.619067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-16 00:41:21.619097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.619454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.619484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.619740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.619769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-16 00:41:21.620192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.620220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.620615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.620645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.621028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.621056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.621461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.621491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.621895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.621923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.622314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.622343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.622739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.622769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.623165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.623193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.623581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.623611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.624008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.624037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-16 00:41:21.624396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.624426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.624693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.624724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.625115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.625144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.625426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.625456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.625848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.625877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.626278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.626307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.626653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.626682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.627066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.627095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.627496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.627526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.627914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.627948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-16 00:41:21.628330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.628359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.628783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.628812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.629203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.629240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.629615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.629644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.629923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.629955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.630347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.630377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.630769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.630797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.631203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.631239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.631630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.631658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.632039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.632068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-16 00:41:21.632471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.632501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.632893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.632921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.633306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.633335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.633512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.633544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.633939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.633967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-16 00:41:21.634347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-16 00:41:21.634376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.634775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.634803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.635040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.635067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.635461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.635490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.635889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.635918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 
00:30:08.501 [2024-07-16 00:41:21.636312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.636342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.636738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.636766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.637163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.637192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.637458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.637488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.637911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.637940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.638339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.638370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.638784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.638813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.639195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.639224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.639546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.639575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.639815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.639846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 
00:30:08.501 [2024-07-16 00:41:21.640224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.640265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.640687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.640717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.641117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.641146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.641536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.641566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.641959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.641988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.642384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.642413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.642801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.642830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.643266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.643298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.643730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.643758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.644088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.644122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 
00:30:08.501 [2024-07-16 00:41:21.644515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.644544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.644914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.644942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.645327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.645356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.645754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.645782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.646178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.646206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.646583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.646612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.646880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.646911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.647309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.647340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.647625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.647652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.648047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.648075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 
00:30:08.501 [2024-07-16 00:41:21.648446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.648476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.648849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.648877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-16 00:41:21.649161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-16 00:41:21.649190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.649597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.649628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.650011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.650039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.650322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.650350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.650756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.650784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.651167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.651196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.651597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.651627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.652020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.652049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 
00:30:08.502 [2024-07-16 00:41:21.652431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.652460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.652862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.652891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.653288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.653319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.653701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.653730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.654127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.654156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.654459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.654490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.654876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.654906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.655171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.655203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.655622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.655652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.655997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.656026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 
00:30:08.502 [2024-07-16 00:41:21.656430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.656459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.656744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.656772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.657161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.657190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.657620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.657650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.658044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.658073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.658465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.658496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.658895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.658924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.659327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.659357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.659740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.659768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.660034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.660067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 
00:30:08.502 [2024-07-16 00:41:21.660460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.660490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.660859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.661284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.661313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.661707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.661735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.662071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.662099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.662468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.662498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.662890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.662919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.663302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.663332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.663725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.663754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.664155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.664184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 
00:30:08.502 [2024-07-16 00:41:21.664522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.664552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.664930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.664958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.665328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.665357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.502 qpair failed and we were unable to recover it. 00:30:08.502 [2024-07-16 00:41:21.665784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.502 [2024-07-16 00:41:21.665812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.666200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.666239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.666627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.666656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.667055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.667083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.667455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.667486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.667877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.667905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.668302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.668332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 
00:30:08.503 [2024-07-16 00:41:21.668727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.668757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.669136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.669164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.669545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.669576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.669967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.669997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.670349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.670379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.670667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.670698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.671079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.671110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.671546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.671576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.671970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.671999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.672382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.672411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 
00:30:08.503 [2024-07-16 00:41:21.672793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.672822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.673193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.673222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.673632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.673660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.674045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.674460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.674490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.674765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.674797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.675178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.675207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.675593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.675624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.676004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.676033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.676429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.676466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 
00:30:08.503 [2024-07-16 00:41:21.676811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.676841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.677226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.677265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.677672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.677702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.678100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.678129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.678485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.678514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.678894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.678924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.679311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.679341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.679738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.679766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.680133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.680162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.680538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.680568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 
00:30:08.503 [2024-07-16 00:41:21.680965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.680994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.681358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.681386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.681781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.681810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.503 [2024-07-16 00:41:21.682192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.503 [2024-07-16 00:41:21.682221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.503 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.682608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.682637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.683035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.683063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.683457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.683487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.683912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.683941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.684325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.684357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.684750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.684779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 
00:30:08.504 [2024-07-16 00:41:21.685111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.685140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.685543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.685573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.686036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.686065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.686463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.686494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.686897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.686925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.687309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.687338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.687734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.687764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.688156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.688185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.688609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.688639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.689022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.689050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 
00:30:08.504 [2024-07-16 00:41:21.689479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.689509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.689912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.689940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.690375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.690405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.690795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.690824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.691144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.691173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.691582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.691611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.691874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.691903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.692282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.692314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.692735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.692763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.693169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.693203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 
00:30:08.504 [2024-07-16 00:41:21.693612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.693643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.694028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.694056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.694443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.694472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.694830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.694860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.504 qpair failed and we were unable to recover it. 00:30:08.504 [2024-07-16 00:41:21.695288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.504 [2024-07-16 00:41:21.695318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.695685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.695714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.695972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.696003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.696353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.696382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.696776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.696805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.697193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.697222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 
00:30:08.505 [2024-07-16 00:41:21.697617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.697646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.698058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.698086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.698384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.698425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.698831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.698860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.699250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.699280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.699692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.699721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.700120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.700149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.700531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.700563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.700948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.700977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 00:30:08.505 [2024-07-16 00:41:21.701329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.505 [2024-07-16 00:41:21.701359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.505 qpair failed and we were unable to recover it. 
00:30:08.505 [2024-07-16 00:41:21.701762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.505 [2024-07-16 00:41:21.701790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.505 qpair failed and we were unable to recover it.
00:30:08.505 [2024-07-16 00:41:21.702186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.505 [2024-07-16 00:41:21.702215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.505 qpair failed and we were unable to recover it.
00:30:08.505 [... the same three-line sequence repeats continuously through 00:41:21.787, every attempt failing with connect() errno = 111 against 10.0.0.2 port 4420 and ending with "qpair failed and we were unable to recover it." for tqpair=0x7fa8a4000b90 ...]
00:30:08.510 [2024-07-16 00:41:21.787423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.510 [2024-07-16 00:41:21.787453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.510 qpair failed and we were unable to recover it.
00:30:08.510 [2024-07-16 00:41:21.787915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.787944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.788335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.788366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.788614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.788644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.789050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.789080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.789460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.789490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.789800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.789831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.790290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.790320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.790729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.790758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.791112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.791142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.791418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.791450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 
00:30:08.510 [2024-07-16 00:41:21.791824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.791853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.792249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.792280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.510 qpair failed and we were unable to recover it. 00:30:08.510 [2024-07-16 00:41:21.792664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.510 [2024-07-16 00:41:21.792693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.793033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.793062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.793312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.793342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.793635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.793666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.794053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.794083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.794248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.794277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.794569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.794599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.795002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.795032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 
00:30:08.511 [2024-07-16 00:41:21.795475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.795505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.795904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.795933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.796327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.796356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.796769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.796799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.797243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.797275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.797699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.797728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.798105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.798135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.798508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.798539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.798927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.798958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.799343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.799373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 
00:30:08.511 [2024-07-16 00:41:21.799783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.799812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.800152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.800182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.800573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.800604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.800986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.801015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.801457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.801488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.801897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.801927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.802361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.802396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.802682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.802711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.802983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.803012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.803278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.803307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 
00:30:08.511 [2024-07-16 00:41:21.803693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.803723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.804171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.804201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.804644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.804675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.804965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.804996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.805275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.805306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.805468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.805494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.805920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.805950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.806357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.806388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.806777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.806806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.807078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.807107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 
00:30:08.511 [2024-07-16 00:41:21.807499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.807531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.807933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.807962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.808342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.511 [2024-07-16 00:41:21.808374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.511 qpair failed and we were unable to recover it. 00:30:08.511 [2024-07-16 00:41:21.808800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.808829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.809179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.809208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.809518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.809548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.809935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.809964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.810127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.810158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.810529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.810558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.810943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.810971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 
00:30:08.512 [2024-07-16 00:41:21.811355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.811386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.811803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.811831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.812185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.812215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.812674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.812704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.813095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.813124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.813417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.813446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.813853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.813883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.814274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.814305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.814701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.814731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.815093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.815122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 
00:30:08.512 [2024-07-16 00:41:21.815520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.815550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.815950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.815980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.816367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.816397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.816617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.816646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.817053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.817082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.817427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.817458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.817873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.817907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.818300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.818330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.818730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.818760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.818972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.819001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 
00:30:08.512 [2024-07-16 00:41:21.819398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.819428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.819822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.819852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.820104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.820132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.820504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.820535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.820921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.820952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.821278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.821309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.821738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.821768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.822054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.822086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.822445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.822477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.512 qpair failed and we were unable to recover it. 00:30:08.512 [2024-07-16 00:41:21.822882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.512 [2024-07-16 00:41:21.822911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 
00:30:08.513 [2024-07-16 00:41:21.823203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.823241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.823620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.823649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.824050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.824080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.824332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.824361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.824758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.824787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.825160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.825189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.825585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.825615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.826018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.826047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.826180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.826207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.826611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.826641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 
00:30:08.513 [2024-07-16 00:41:21.827029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.827060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.827446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.827477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.827844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.827874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.828265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.828296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.828680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.828710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.829121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.829151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.829563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.829593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.829980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.830010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.830397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.830428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.830820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.830850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 
00:30:08.513 [2024-07-16 00:41:21.831248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.831279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.831730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.831760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.832189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.832218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.832616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.832647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.833043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.833074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.833468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.833499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.833788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.833825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.834222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.834261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.834555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.834583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.834975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.835004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 
00:30:08.513 [2024-07-16 00:41:21.835400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.835431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.835836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.835866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.836263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.836294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.836685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.836715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.837109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.837139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.837519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.837550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.837936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.837967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.513 qpair failed and we were unable to recover it. 00:30:08.513 [2024-07-16 00:41:21.838351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.513 [2024-07-16 00:41:21.838382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.514 qpair failed and we were unable to recover it. 00:30:08.514 [2024-07-16 00:41:21.838769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.514 [2024-07-16 00:41:21.838798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.514 qpair failed and we were unable to recover it. 00:30:08.514 [2024-07-16 00:41:21.839200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.514 [2024-07-16 00:41:21.839239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.514 qpair failed and we were unable to recover it. 
00:30:08.514 [2024-07-16 00:41:21.839591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.514 [2024-07-16 00:41:21.839621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.514 qpair failed and we were unable to recover it.
[... the same three messages repeat for every retry from 2024-07-16 00:41:21.839999 through 00:41:21.928385: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fa8a4000b90 at addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:30:08.520 [2024-07-16 00:41:21.928795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.520 [2024-07-16 00:41:21.928825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.520 qpair failed and we were unable to recover it.
00:30:08.520 [2024-07-16 00:41:21.929109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.929141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.929539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.929570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.929980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.930010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.930412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.930449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.930727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.930757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.931163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.931192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.931490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.931524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.931919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.931949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.932337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.932370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.932794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.932825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 
00:30:08.520 [2024-07-16 00:41:21.933248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.933278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.933683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.933713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.933994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.934023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.934438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.934468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.934843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.934874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.935268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.935319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.935743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.935773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.936187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.936217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.936585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.936615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.937020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.937051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 
00:30:08.520 [2024-07-16 00:41:21.937448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.937480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.937856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.937887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.938296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.938328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.938715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.938746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.939145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.939176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.939585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.939619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.939991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.940021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.940420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.940451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.940858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.940888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.941311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.941341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 
00:30:08.520 [2024-07-16 00:41:21.941701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.941733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.520 qpair failed and we were unable to recover it. 00:30:08.520 [2024-07-16 00:41:21.942117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.520 [2024-07-16 00:41:21.942147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.942552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.942584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.942967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.942999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.943320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.943351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.943751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.943782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.944172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.944203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.944620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.944651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.945021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.945052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.945460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.945491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 
00:30:08.521 [2024-07-16 00:41:21.945892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.945924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.946217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.946255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.946666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.946696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.947103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.947139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.947508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.947540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.947958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.947988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.948411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.948441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.948845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.948875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.949280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.949312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.949724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.949754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 
00:30:08.521 [2024-07-16 00:41:21.950166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.950198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.950674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.950707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.951105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.951136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.951493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.951524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.951953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.951985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.952401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.952434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.952835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.952865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.953293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.953327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.953727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.953757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.954163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.954192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 
00:30:08.521 [2024-07-16 00:41:21.954576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.954609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.955035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.955065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.955477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.955509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.955912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.955942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.956346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.956380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.956820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.956851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.957271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.957302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.957709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.957739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.958087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.958119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.958515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.958546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 
00:30:08.521 [2024-07-16 00:41:21.958967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.958998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.959379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.521 [2024-07-16 00:41:21.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.521 qpair failed and we were unable to recover it. 00:30:08.521 [2024-07-16 00:41:21.959802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.959833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.960268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.960301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.960740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.960772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.961145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.961175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.961552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.961590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.961901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.961932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.962351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.962382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.962795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.962825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 
00:30:08.522 [2024-07-16 00:41:21.963219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.963261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.963711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.963741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.964152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.964182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.964474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.964515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.964910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.964941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.965354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.965385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.965814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.965845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.966250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.966281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.966730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.966761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.967182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.967212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 
00:30:08.522 [2024-07-16 00:41:21.967636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.967668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.968065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.968096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.968517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.968548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.968829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.968863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.969282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.969315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.969578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.969611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.970076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.970106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.970498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.970531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.970781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.970813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.971206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.971245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 
00:30:08.522 [2024-07-16 00:41:21.971667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.971698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.972109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.972139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.972539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.972571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.972972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.973001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.973402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.522 [2024-07-16 00:41:21.973858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.522 [2024-07-16 00:41:21.973889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.522 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.974304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.974336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.974750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.974780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.975200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.975242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.975644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.975675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 
00:30:08.523 [2024-07-16 00:41:21.975969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.975998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.976409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.976440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.976844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.976874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.977290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.977320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.977753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.977784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.978201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.978242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.978655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.978685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.979130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.979160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.979597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.979630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.980033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.980064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 
00:30:08.523 [2024-07-16 00:41:21.980412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.980443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.980736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.980768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.981181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.981212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.981635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.981671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.982032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.982063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.982476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.982507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.982986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.983016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.983435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.983466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.983867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.983898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 00:30:08.523 [2024-07-16 00:41:21.984311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.523 [2024-07-16 00:41:21.984344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.523 qpair failed and we were unable to recover it. 
00:30:08.523 [2024-07-16 00:41:21.984756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.523 [2024-07-16 00:41:21.984788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.523 qpair failed and we were unable to recover it.
00:30:08.523 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 00:41:21.985 and 00:41:22.072 ...]
00:30:08.528 [2024-07-16 00:41:22.072879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.528 [2024-07-16 00:41:22.072912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.528 qpair failed and we were unable to recover it.
00:30:08.528 [2024-07-16 00:41:22.073311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.073344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.073647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.073679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.074051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.074080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.074516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.074547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.074948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.074980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.075390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.075421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.075839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.075871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.076278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.076309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.076709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.076739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.077149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.077180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 
00:30:08.529 [2024-07-16 00:41:22.077601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.077633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.078024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.078054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.078448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.078480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.078855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.078885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.079267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.079300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.079700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.079731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.080126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.080156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.080539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.080570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.080985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.081015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.081420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.081451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 
00:30:08.529 [2024-07-16 00:41:22.081854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.081885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.082302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.082333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.082734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.082764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.083172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.083211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.083528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.083562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.084002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.084032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.084442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.084473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.084876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.084906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.085304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.085335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.085770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.085801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 
00:30:08.529 [2024-07-16 00:41:22.086078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.086108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.086508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.086539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.086770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.086803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.087261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.087294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.087700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.087732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.088136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.088167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.088590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.088621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.089046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.089078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.089497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.089529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 00:30:08.529 [2024-07-16 00:41:22.089930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.529 [2024-07-16 00:41:22.089960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.529 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-07-16 00:41:22.090361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.090392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.090800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.090830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.091251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.091284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.091690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.091720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.092127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.092159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.092543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.092575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.092997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.093028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.093438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.093469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.093868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.093899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.094311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.094343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-07-16 00:41:22.094636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.094668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.095111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.095141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.095550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.095583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.095997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.096030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.096435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.096466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.096858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.096890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.097289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.097322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.097740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.097771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.098132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.098162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.098485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.098514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-07-16 00:41:22.098927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.098958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.099378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.099411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.099774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.099806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.100203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.100250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.100708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.100738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.101035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.101066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.101446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.101478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.530 [2024-07-16 00:41:22.101778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.530 [2024-07-16 00:41:22.101808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.530 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.102215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.102261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.102712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.102743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 
00:30:08.804 [2024-07-16 00:41:22.102934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.102966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.103392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.103423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.103821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.103852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.104222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.104265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.104677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.104710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.105177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.105207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.105655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.105686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.106073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.106103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.106515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.106547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.106969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.106998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 
00:30:08.804 [2024-07-16 00:41:22.107410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.107441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.107859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.107889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.108290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.108322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.108744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.108775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.109173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.109203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.109591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.109621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.109996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.110026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.110435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.110465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.110867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.110898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.111294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.111327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 
00:30:08.804 [2024-07-16 00:41:22.111656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.111687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.112093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.112124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.112518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.112550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.112968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.112999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.113410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.113441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.113825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.113855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.114267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.114300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.114687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.114718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.115194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.115224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.115672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.115703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 
00:30:08.804 [2024-07-16 00:41:22.116014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.116048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.116435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.116466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.116742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.116773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.117198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.804 [2024-07-16 00:41:22.117244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.804 qpair failed and we were unable to recover it. 00:30:08.804 [2024-07-16 00:41:22.117669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.117700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.118095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.118125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.118519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.118551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.119026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.119057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.119474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.119505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.119812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.119844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 
00:30:08.805 [2024-07-16 00:41:22.120224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.120266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.120646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.120675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.121101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.121131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.121520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.121551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.122022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.122052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.122483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.122514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.122816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.122847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.123271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.123303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.123574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.123602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.123984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.124014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 
00:30:08.805 [2024-07-16 00:41:22.124419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.124451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.124878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.124908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.125341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.125372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.125669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.125701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.126161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.126191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.126604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.126635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.127021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.127051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.127392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.127424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.127723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.127762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.128178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.128209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 
00:30:08.805 [2024-07-16 00:41:22.128686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.128719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.129127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.129157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.129561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.129593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.129866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.129897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.130202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.130247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.130762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.130793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.131157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.131187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.131586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.131619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.131924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.131954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.132391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.132422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 
00:30:08.805 [2024-07-16 00:41:22.132863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.132894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.133163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.133193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.133694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.133727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.134103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.134133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.134513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-16 00:41:22.134545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-16 00:41:22.134936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.134967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.135430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.135462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.135865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.135896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.136302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.136335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.136776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.136807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 
00:30:08.806 [2024-07-16 00:41:22.137190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.137221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.137626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.137657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.138053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.138083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.138467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.138497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.138907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.138936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.139339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.139371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.139829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.139859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.140246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.140278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.140724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.140755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.141035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.141065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 
00:30:08.806 [2024-07-16 00:41:22.141393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.141428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.141859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.141888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.142261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.142292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.142743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.142774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.143161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.143192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.143521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.143552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.143967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.143996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.144403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.144435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.144879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.144909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.145323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.145353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 
00:30:08.806 [2024-07-16 00:41:22.145791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.145829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.146239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.146271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.146683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.146713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.147133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.147163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.147603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.147634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.148035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.148065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.148371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.148406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.148816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.148846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.149263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.149296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.149696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.149726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 
00:30:08.806 [2024-07-16 00:41:22.150123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.150154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.150549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.150581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.150995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.151030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.151401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.151432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.151831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.151862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.806 [2024-07-16 00:41:22.152273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.806 [2024-07-16 00:41:22.152304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.806 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.152730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.152761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.153205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.153271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.153559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.153588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.154005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.154035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 
00:30:08.807 [2024-07-16 00:41:22.154343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.154378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.154785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.154816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.155221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.155263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.155712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.155742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.156157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.156187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.156445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.156476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.156837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.156866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.157288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.157321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.157620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.157648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.158103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.158133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 
00:30:08.807 [2024-07-16 00:41:22.158531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.158563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.158953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.158984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.159279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.159312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.159604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.159634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.160043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.160074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.160499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.160530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.160939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.160969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.161466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.161497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.161888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.161919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.162337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.162367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 
00:30:08.807 [2024-07-16 00:41:22.162801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.162838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.163126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.163160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.163594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.163626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.164061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.164090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.165964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.166030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.166446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.166483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.166779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.166814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.167222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.167268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.167469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.167504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.167912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.167942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 
00:30:08.807 [2024-07-16 00:41:22.168349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.168380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.168816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.168847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.169271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.169302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.169699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.169730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.170136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.170165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.170577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.170609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.807 [2024-07-16 00:41:22.170889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.807 [2024-07-16 00:41:22.170921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.807 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.171326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.171356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.171765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.171795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.172215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.172256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 
00:30:08.808 [2024-07-16 00:41:22.172680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.172710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.173118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.173149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.173546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.173578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.173996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.174027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.174453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.174485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.174885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.174917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.175322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.175353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.175799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.175829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.176250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.176282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.176700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.176732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 
00:30:08.808 [2024-07-16 00:41:22.177144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.177175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.177590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.177623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.178060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.178090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.178510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.178541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.178919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.178949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.179365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.179397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.179770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.179800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.180211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.180251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.180654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.180685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.181097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.181128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 
00:30:08.808 [2024-07-16 00:41:22.181523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.181560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.181991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.182022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.182424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.182455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.182867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.182897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.183314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.183345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.183723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.183752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.184154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.184185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.184604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.184636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.185044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.185074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 00:30:08.808 [2024-07-16 00:41:22.185364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.808 [2024-07-16 00:41:22.185395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.808 qpair failed and we were unable to recover it. 
00:30:08.808 [2024-07-16 00:41:22.185820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.185850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.186268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.186301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.186713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.186744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.187090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.187121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.187532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.187563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.187845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.187877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.188311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.188345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.188775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.188806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.189202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.189243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.189703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.189733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 
00:30:08.809 [2024-07-16 00:41:22.190153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.190183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.190596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.190629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.191022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.191054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.191464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.191499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.191947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.191978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.192380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.192412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.192811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.192843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.193259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.193290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.193730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.193761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.194162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.194191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 
00:30:08.809 [2024-07-16 00:41:22.194620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.194651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.195046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.195077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.196967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.197031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.197423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.197459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.199732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.199797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.200280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.200319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.200736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.200769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.201175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.201205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.201720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.201751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.202162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.202194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 
00:30:08.809 [2024-07-16 00:41:22.202630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.202670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.203072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.203103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.203514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.203547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.203924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.203956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.204250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.204282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.204685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.204716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.206498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.206559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.206990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.207026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.207461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.207493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.809 [2024-07-16 00:41:22.207896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.207926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 
00:30:08.809 [2024-07-16 00:41:22.208319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.809 [2024-07-16 00:41:22.208351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.809 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.208768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.208798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.209217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.209260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.209675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.209707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.210106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.210138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.210510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.210544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.210920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.210951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.211350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.211381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.211789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.211820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.212244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.212275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 
00:30:08.810 [2024-07-16 00:41:22.213948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.214003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.214354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.214393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.214721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.214752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.215166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.215197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.215621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.215652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.216051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.216081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.216467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.216498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.216797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.216840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.217278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.217328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.217763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.217816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 
00:30:08.810 [2024-07-16 00:41:22.218188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.218256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.218715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.218765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.219210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.219318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.219777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.219808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.220212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.220260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.220677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.220709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.221105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.221137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.221423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.221457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.221828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.221860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.222127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.222160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 
00:30:08.810 [2024-07-16 00:41:22.222507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.222547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.222841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.222872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.223292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.223325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.223704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.223738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.224144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.224175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.224518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.224550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.224949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.224980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.225383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.225414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.225641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.225671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.226062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.226091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 
00:30:08.810 [2024-07-16 00:41:22.226488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.226519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.810 qpair failed and we were unable to recover it. 00:30:08.810 [2024-07-16 00:41:22.226943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.810 [2024-07-16 00:41:22.226973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.227295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.227327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.227760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.227793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.228082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.228116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.228531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.228567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.228971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.229003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.229515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.229547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.229818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.229849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.230269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.230302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 
00:30:08.811 [2024-07-16 00:41:22.230726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.230757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.231060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.231088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.231456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.231488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.231899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.231930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.232198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.232228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.232668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.232698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.233119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.233151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.233404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.233435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.233917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.233948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.234268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.234300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 
00:30:08.811 [2024-07-16 00:41:22.234707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.234737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.235121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.235152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.235576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.235607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.236002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.236033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.236257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.236287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.236706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.236735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.237149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.237180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.237640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.237675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.238076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.238106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.238509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.238541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 
00:30:08.811 [2024-07-16 00:41:22.238963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.239000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.239422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.239455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.239866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.239897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.240245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.240278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.240558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.240589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.240985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.241016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.241382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.241416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.241835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.241865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.242337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.242368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.242642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.242674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 
00:30:08.811 [2024-07-16 00:41:22.243069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.243100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.243495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.811 [2024-07-16 00:41:22.243527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.811 qpair failed and we were unable to recover it. 00:30:08.811 [2024-07-16 00:41:22.243943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.243974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.244361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.244396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.244739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.244772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.245221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.245283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.245717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.245747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.246206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.246250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.246691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.246721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.247115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.247146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 
00:30:08.812 [2024-07-16 00:41:22.247552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.247583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.247871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.247899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.248289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.248323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.248731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.248761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.249147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.249176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.249580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.249612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.249981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.250012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.250398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.250429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.250744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.250775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.251072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.251105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 
00:30:08.812 [2024-07-16 00:41:22.251519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.251550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.251898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.251930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.252344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.252375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.252729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.252760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.253173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.253204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.253540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.253573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.253986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.254016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.254418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.254449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.254852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.254882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.255298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.255329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 
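For contrast, the refusals stop only once a socket is bound and listening on the target side. The hypothetical listener below (an illustration, not the SPDK target) binds to all local interfaces for simplicity; run on the host that owns 10.0.0.2, it would let the connect() in the earlier sketch succeed instead of returning ECONNREFUSED:

```c
/* Illustrative only: a bare TCP listener on port 4420 so that a client's
 * connect() is accepted rather than refused. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);    /* all interfaces, incl. 10.0.0.2 if configured */
    addr.sin_port = htons(4420);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) { perror("bind"); return 1; }
    if (listen(fd, 16) != 0) { perror("listen"); return 1; }

    printf("listening on *:4420; client connect() attempts are now accepted\n");
    int client = accept(fd, NULL, NULL);         /* blocks until a client connects */
    if (client >= 0) close(client);
    close(fd);
    return 0;
}
```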
00:30:08.812 [2024-07-16 00:41:22.255725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.255762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.256174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.256204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.256608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.256639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.257066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.257096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.257548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.257580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.257975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.258006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.258411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.812 [2024-07-16 00:41:22.258443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.812 qpair failed and we were unable to recover it. 00:30:08.812 [2024-07-16 00:41:22.258708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.258739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.259158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.259187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.259599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.259630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-07-16 00:41:22.260040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.260071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.260490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.260520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.260991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.261021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.261328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.261359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.261782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.261812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.262246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.262279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.262774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.262804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.263102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.263130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.263571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.263602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.263916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.263950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-07-16 00:41:22.264373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.264405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.264831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.264861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.265171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.265203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.265536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.265568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.266012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.266043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.266326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.266356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.266783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.266813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.267087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.267120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.267597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.267627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.267889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.267919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-07-16 00:41:22.268250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.268282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.268652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.268684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.268939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.268970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.269327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.269358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.269622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.269650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.270003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.270033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.270427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.270457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.270786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.270817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.271094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.271124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.271459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.271492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 
00:30:08.813 [2024-07-16 00:41:22.271921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.271956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.272252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.272282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.272662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.272693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.273083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.273113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.273331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.273361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.273751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.273781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.274160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.274192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.274621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.813 [2024-07-16 00:41:22.274653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.813 qpair failed and we were unable to recover it. 00:30:08.813 [2024-07-16 00:41:22.275050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.275081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.275374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.275409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 
00:30:08.814 [2024-07-16 00:41:22.275830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.275860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.276260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.276291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.276730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.276761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.277172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.277202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.277522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.277554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.277964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.277995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.278473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.278504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.278784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.278816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.279277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.279309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 00:30:08.814 [2024-07-16 00:41:22.279600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.814 [2024-07-16 00:41:22.279630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.814 qpair failed and we were unable to recover it. 
00:30:08.814 [2024-07-16 00:41:22.279936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.814 [2024-07-16 00:41:22.279966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.814 qpair failed and we were unable to recover it.
[...the same three-line error sequence repeats for every subsequent connection attempt, with only the timestamps advancing (2024-07-16 00:41:22.280206 through 00:41:22.365621); every attempt reports connect() errno = 111 for tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420, and every qpair fails without recovery...]
00:30:08.819 [2024-07-16 00:41:22.366076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.819 [2024-07-16 00:41:22.366107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.819 qpair failed and we were unable to recover it.
00:30:08.819 [2024-07-16 00:41:22.366406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.366438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.366830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.366860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.367266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.367298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.367732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.367763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.368158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.368188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.368623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.368654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.368997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.369028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.369300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.369333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.369735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.369765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.370047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.370076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 
00:30:08.819 [2024-07-16 00:41:22.370467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.370498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.370902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.370933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.371335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.371367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.371806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.371836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.372283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.372316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.372606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.372641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.373041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.819 [2024-07-16 00:41:22.373071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.819 qpair failed and we were unable to recover it. 00:30:08.819 [2024-07-16 00:41:22.373464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.373496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.373911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.373943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.374368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.374399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-07-16 00:41:22.374804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.374833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.375239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.375271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.375549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.375577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.375980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.376010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.376415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.376448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.376872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.376902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.377321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.377353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.377773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.377805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.378255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.378287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.378589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.378619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-07-16 00:41:22.378992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.379023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.379443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.379474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.379870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.379900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.380324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.380355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.380801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.380831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.381254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.381286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.381685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.381715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.382062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.382098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.382503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.382536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.382934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.382966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-07-16 00:41:22.383442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.383473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.383903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.383934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.384364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.384394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.384683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.384716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.385107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.385137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.385472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.385505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.385919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.385949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.386367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.386398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.386802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.386832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.387258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.387288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 
00:30:08.820 [2024-07-16 00:41:22.387724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.387754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.388207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.388248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.388593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.388623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.389038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.389068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.389503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.389535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.389953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.389985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.390385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.390417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.390843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.820 [2024-07-16 00:41:22.390874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.820 qpair failed and we were unable to recover it. 00:30:08.820 [2024-07-16 00:41:22.391267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.391298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.391549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.391578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-07-16 00:41:22.391965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.391995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.392456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.392487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.392908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.392939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.393356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.393386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.393809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.393840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.394159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.394190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.394665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.394696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.395101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.395131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.395590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.395622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.396034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.396065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-07-16 00:41:22.396512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.396544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.396959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.396990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.397336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.397368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.397773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.397804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.398222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.398278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.398714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.398744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.399164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.399194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.399499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.399539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.399922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.399953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.400349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.400380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-07-16 00:41:22.400785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.400815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.401190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.401221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.401705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.401735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.402148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.402178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.402598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.402630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.402912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.402940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.403278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.403309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.403746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.403777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.404196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.404227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.404583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.404614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 
00:30:08.821 [2024-07-16 00:41:22.404909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.404941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.405340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.405371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.405793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.405823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.821 qpair failed and we were unable to recover it. 00:30:08.821 [2024-07-16 00:41:22.406285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.821 [2024-07-16 00:41:22.406316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.406768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.406799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.407251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.407282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.407705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.407735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.408114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.408146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.408569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.408601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.408997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.409029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-07-16 00:41:22.409395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.409426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.409837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.409867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.410296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.410326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.410757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.410786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.411209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.411250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.411631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.411662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.412047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.412079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.412466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.412498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.412888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.412919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.413341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.413373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-07-16 00:41:22.413800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.413830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.414186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.414217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.414671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.414701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.415078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.415109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.415471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.415504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.415822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.415852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.416132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.416161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.416591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.416628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.417035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.417065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.417469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.417501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-07-16 00:41:22.417866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.417897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.418295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.418326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.418710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.418741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.419031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.419062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.419360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.419392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.419826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.419857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.420174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.420205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.420614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.420645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.420975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.421005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-16 00:41:22.421404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.822 [2024-07-16 00:41:22.421436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-07-16 00:41:22.421796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.822 [2024-07-16 00:41:22.421829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.822 qpair failed and we were unable to recover it.
00:30:08.822 [2024-07-16 00:41:22.422241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.822 [2024-07-16 00:41:22.422273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:08.822 qpair failed and we were unable to recover it.
00:30:09.099 [...] (the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 00:41:22.422680 through 00:41:22.508570)
00:30:09.099 [2024-07-16 00:41:22.508965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.508995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.509365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.509395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.509815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.509844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.510246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.510280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.510737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.510767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.511111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.511142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.511518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.511550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.511951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.511982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.512419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.512451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.512835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.512866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-07-16 00:41:22.513257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.513289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.513697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.513728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.514006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.514038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.514429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.514462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.514756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.514787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.515145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.515176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.515620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.515653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.516075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.516111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.516424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.516456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.516939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.516971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 
00:30:09.099 [2024-07-16 00:41:22.517314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.517345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.517742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.517772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.518162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.099 [2024-07-16 00:41:22.518194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.099 qpair failed and we were unable to recover it. 00:30:09.099 [2024-07-16 00:41:22.518606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.518966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.518998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.519306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.519338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.519641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.519673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.520078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.520110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.520611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.520644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.521071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.521102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-07-16 00:41:22.521520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.521554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.521980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.522012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.522322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.522354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.522769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.522801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.523066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.523097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.523582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.523612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.524042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.524074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.524343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.524376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.524768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.524806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.525148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.525178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-07-16 00:41:22.525598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.525627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.526035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.526063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.526411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.526441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.526857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.526886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.527276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.527307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.527738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.527767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.528201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.528244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.528658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.528688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.528986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.529015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.529315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.529344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-07-16 00:41:22.529655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.529684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.530084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.530113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.530499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.530528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.530812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.530840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.531223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.531265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.531667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.531695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.532087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.532115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.532413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.532448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.532872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.532900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.533309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.533339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 
00:30:09.100 [2024-07-16 00:41:22.533769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.533798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.100 [2024-07-16 00:41:22.534192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.100 [2024-07-16 00:41:22.534221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.100 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.534566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.534595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.534990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.535018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.535403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.535432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.535861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.535889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.536337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.536367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.536778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.536806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.537220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.537262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.537543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.537574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-07-16 00:41:22.537994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.538022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.538374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.538404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.538618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.538645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.538910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.538940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.539391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.539421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.539839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.539867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.540269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.540299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.540726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.540753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.541027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.541055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.541397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.541425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-07-16 00:41:22.541718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.541746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.542151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.542180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.542632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.542661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.542925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.542953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.543443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.543475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.543902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.543930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.544335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.544365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.544801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.544830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.545279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.545309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.545759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.545787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 
00:30:09.101 [2024-07-16 00:41:22.546077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.546104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.546432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.546461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.546819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.546847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.547168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.547198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.547605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.547635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.547937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.547964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.548254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.548284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.548594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.548629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.101 [2024-07-16 00:41:22.548926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.101 [2024-07-16 00:41:22.548955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.101 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.549362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.549392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-07-16 00:41:22.549669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.549699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.550082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.550111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.550558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.550587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.550893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.550920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.551225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.551267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.551738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.551768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.552051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.552084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.552517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.552548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.552805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.552833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.553253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.553282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-07-16 00:41:22.553736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.553764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.554204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.554243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.554537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.554569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.554951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.554979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.555400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.555429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.555726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.555754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.556044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.556076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.556419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.556447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.556741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.556769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.557051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.557088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-07-16 00:41:22.557511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.557542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.557946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.557974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.558253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.558282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.558683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.558711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.559006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.559034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.559492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.559521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.559913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.559941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.560360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.560388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.560773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.560802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.561183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.561212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 
00:30:09.102 [2024-07-16 00:41:22.561644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.561673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.561972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.562001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.102 qpair failed and we were unable to recover it. 00:30:09.102 [2024-07-16 00:41:22.562426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.102 [2024-07-16 00:41:22.562456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.562752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.562779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.563160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.563189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.563612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.563641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.564036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.564064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.565913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.565978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.566414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.566447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.566814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.566843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-07-16 00:41:22.567257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.567287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.567706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.567734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.568192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.568221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.568632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.568660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.569071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.569099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.569515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.569544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.569970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.569998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.570396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.570427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.570825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.570854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.571298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.571328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-07-16 00:41:22.571746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.571775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.572200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.572242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.572640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.572670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.572962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.572988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.573408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.573437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.573861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.573890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.574301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.574331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.574792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.574821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.575192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.575220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.575627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.575655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 
00:30:09.103 [2024-07-16 00:41:22.576065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.576094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.576518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.576547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.576939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.576968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.577161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.577191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.579704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.579780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.580270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.580307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.582210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.582284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.582738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.582768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.583162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.583191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.103 [2024-07-16 00:41:22.583594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-16 00:41:22.583627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-07-16 00:41:22.584027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.584056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.584538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.584569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.584980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.585008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.585432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.585461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.585895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.585924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.586325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.586356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.586778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.586808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.587215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.587256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.587707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.587736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.588151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.588179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-07-16 00:41:22.588577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.588606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.588927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.588954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.589375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.589404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.589704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.589733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.590143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.590171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.590579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.590608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.590988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.591016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.591361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.591390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.591813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.591842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.592286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.592318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-07-16 00:41:22.592526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.592558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.592868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.592898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.593313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.593343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.593742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.593770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.594069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.594098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.594493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.594522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.594932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.594960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.595359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.595389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.595830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.595858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.596250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.596281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 
00:30:09.104 [2024-07-16 00:41:22.596684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.596712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.597154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.597182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.597634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.597664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.598061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.598090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.598455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.598490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.598787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.598818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.599241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.599271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.599704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.599731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.104 qpair failed and we were unable to recover it. 00:30:09.104 [2024-07-16 00:41:22.600141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.104 [2024-07-16 00:41:22.600170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.600568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.600602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-07-16 00:41:22.600944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.600972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.601371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.601402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.601794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.601822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.602221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.602263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.602657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.602686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.603088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.603118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.603543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.603572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.603994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.604024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.604445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.604475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.604842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.604871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-07-16 00:41:22.605313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.605344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.607705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.607773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.608278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.608316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.608745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.608775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.609263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.609294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.609710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.609738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.610136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.610163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.610568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.610599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.610993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.611021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.611413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.611443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-07-16 00:41:22.611862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.611891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.612289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.612319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.612724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.612754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.614550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.614609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.615085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.615118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.615429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.615462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.615747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.615776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.616178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.616206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.616602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.616632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.617028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.617056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 
00:30:09.105 [2024-07-16 00:41:22.617331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.617364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.617738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.617767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.618200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.618227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.618517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.618548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.618981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.619018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.619382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.619413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.619829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.619857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.620183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.620210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.105 qpair failed and we were unable to recover it. 00:30:09.105 [2024-07-16 00:41:22.620609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.105 [2024-07-16 00:41:22.620638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.620939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.620967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 
00:30:09.106 [2024-07-16 00:41:22.621386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.621416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.621834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.621863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.622281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.622313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.622735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.622765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.623058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.623085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.623509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.623538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.623921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.623948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.624264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.624293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.624702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.624731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.625145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.625173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 
00:30:09.106 [2024-07-16 00:41:22.625599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.625629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.626046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.626074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.626459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.626488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.626896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.626925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.627320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.627349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.627782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.627811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.628227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.628280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.628556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.628588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.629053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.629082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.629483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.629512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 
00:30:09.106 [2024-07-16 00:41:22.629919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.629947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.630348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.630379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.630651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.630682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.631082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.631110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.631357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.631386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.631793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.631821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.632240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.632269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.632699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.632726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.633047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.633074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.633458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.633489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 
00:30:09.106 [2024-07-16 00:41:22.633882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.633910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.634264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.634292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.634691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.634719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.635132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.106 [2024-07-16 00:41:22.635160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.106 qpair failed and we were unable to recover it. 00:30:09.106 [2024-07-16 00:41:22.635567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.635602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.636013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.636041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.636430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.636460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.636733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.636763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.637162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.637190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.637658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.637687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 
00:30:09.107 [2024-07-16 00:41:22.638090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.638118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.638503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.638534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.638803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.638832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.639252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.639282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.639675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.639703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.640064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.640091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.640504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.640534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.640923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.640951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.641344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.641374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 00:30:09.107 [2024-07-16 00:41:22.641780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.107 [2024-07-16 00:41:22.641807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.107 qpair failed and we were unable to recover it. 
00:30:09.107 [2024-07-16 00:41:22.642250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.107 [2024-07-16 00:41:22.642279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.107 qpair failed and we were unable to recover it.
00:30:09.107 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts logged between 00:41:22.642 and 00:41:22.735 ...]
00:30:09.385 [2024-07-16 00:41:22.735897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.385 [2024-07-16 00:41:22.735924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.385 qpair failed and we were unable to recover it.
00:30:09.385 [2024-07-16 00:41:22.736340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.736371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.736713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.736742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.737033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.737060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.737491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.737520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.737913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.737941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.738248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.738277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.738689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.738716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.739112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.739140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.739548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.739578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.739899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.739926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-07-16 00:41:22.740294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.740323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.740735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.740763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.741173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.741202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.741602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.741632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.742051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.742084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.742365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.742397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.742678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.742705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.743111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.743139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.743571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.743601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.744025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.744052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-07-16 00:41:22.744453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.744482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.744857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.744886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.745269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.745299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.745692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.745721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.745912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.745939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.746274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.746303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.746603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.746630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.747009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.747036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-16 00:41:22.747424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-16 00:41:22.747455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.747858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.747885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-07-16 00:41:22.748304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.748334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.748735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.748762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.749180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.749208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.749636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.749665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.750000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.750029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.750446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.750477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.750868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.750897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.751304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.751333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.751627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.751657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.752072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.752100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-07-16 00:41:22.752463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.752493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.752928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.752957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.753366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.753396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.753835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.753863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.754286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.754314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.754716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.754744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.755154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.755182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.755594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.755623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.756096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.756124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.756532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.756562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-07-16 00:41:22.756976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.757006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.757499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.757529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.757953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.757986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.758387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.758415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.758731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.758764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.759180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.759209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.759635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.759664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.759957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.759984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.760418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.760448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.760748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.760776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 [2024-07-16 00:41:22.761241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.761270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.761697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.761725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.762136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.762164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.762617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.762646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.762932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.762959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.763359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.763388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.763771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.763798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.764125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.764153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.764369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-16 00:41:22.764398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-16 00:41:22.764804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.764833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-07-16 00:41:22.765253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.765282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.765680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.765708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.766103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.766131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.766424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.766455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.766885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.766913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.767380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.767409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.767717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.767747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.768048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.768076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.768287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.768316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.768763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.768792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-07-16 00:41:22.769115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.769146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.769557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.769588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.769993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.770021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.770411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.770441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.770843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.770871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.771290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.771320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.771731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.771759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.772205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.772271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.772710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.772739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.773133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.773161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-07-16 00:41:22.773655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.773686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.774065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.774093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.774521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.774550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.774932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.774960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.775348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.775382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.775535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.775562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.775960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.775988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.776259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.776290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.776687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.776717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.777119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.777147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 
00:30:09.387 [2024-07-16 00:41:22.777423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.777455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.777868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.777896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.778174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.778204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.778641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.778671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.778998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.779027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.779336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.779364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.779749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.779777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.780060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.780087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.387 [2024-07-16 00:41:22.780397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.387 [2024-07-16 00:41:22.780429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.387 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.780832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.780860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-07-16 00:41:22.781264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.781295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.781610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.781637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.781966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.781994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.782408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.782437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.782847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.782876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.783275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.783304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.783694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.783723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.784179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.784207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.784677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.784706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.785102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.785130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-07-16 00:41:22.785528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.785558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.785882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.785911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.786203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.786254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.786751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.786779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.787180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.787208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.787502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.787531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.787909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.787937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.788368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.788398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.788856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.788885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 00:30:09.388 [2024-07-16 00:41:22.789303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.388 [2024-07-16 00:41:22.789333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.388 qpair failed and we were unable to recover it. 
00:30:09.388 [2024-07-16 00:41:22.789598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.388 [2024-07-16 00:41:22.789625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.388 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt logged between 00:41:22.789 and 00:41:22.877; only the timestamps differ ...]
00:30:09.395 [2024-07-16 00:41:22.877163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.395 [2024-07-16 00:41:22.877195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.395 qpair failed and we were unable to recover it.
00:30:09.395 [2024-07-16 00:41:22.877605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.877634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.878032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.878060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.878462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.878491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.878832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.878860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.879289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.879317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.879731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.879759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.880170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.880197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.880555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.880584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.880876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.880907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.881308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.881338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 
00:30:09.395 [2024-07-16 00:41:22.881599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.881628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.882072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.882100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.882475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.882504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.882905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.882933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.883340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.395 [2024-07-16 00:41:22.883369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.395 qpair failed and we were unable to recover it. 00:30:09.395 [2024-07-16 00:41:22.883797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.883826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.884205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.884243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.884607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.884635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.885021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.885048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.885529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.885557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 
00:30:09.396 [2024-07-16 00:41:22.885952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.885980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.886219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.886270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.886686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.886714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.886968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.886995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.887393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.887422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.887803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.887831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.888226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.888265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.888654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.888682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.889158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.889185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.889591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.889620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 
00:30:09.396 [2024-07-16 00:41:22.890039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.890067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.890456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.890487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.890906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.890935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.891344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.891374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.891622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.891654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.892052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.892080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.892479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.892508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.892957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.892985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.893303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.893332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.893762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.893789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 
00:30:09.396 [2024-07-16 00:41:22.894144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.894171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.894576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.894605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.894913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.396 [2024-07-16 00:41:22.894940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.396 qpair failed and we were unable to recover it. 00:30:09.396 [2024-07-16 00:41:22.895390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.895419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.895804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.895833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.896135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.896164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.896555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.896583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.896982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.897009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.897324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.897353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.897710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.897738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 
00:30:09.397 [2024-07-16 00:41:22.898088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.898117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.898511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.898541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.898822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.898850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.899282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.899311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.899719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.899746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.900148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.900184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.900610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.900640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.901048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.901075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.901469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.901498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.901888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.901916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 
00:30:09.397 [2024-07-16 00:41:22.902301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.902330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.902748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.902781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.903198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.903226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.903647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.903676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.904102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.904130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.904516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.904544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.904918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.904948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.905244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.905273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.905663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.905691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.906051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.906080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 
00:30:09.397 [2024-07-16 00:41:22.906460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.397 [2024-07-16 00:41:22.906490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.397 qpair failed and we were unable to recover it. 00:30:09.397 [2024-07-16 00:41:22.906880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.906908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.907206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.907243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.907618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.907647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.908060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.908088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.908483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.908513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.908907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.908935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.909336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.909365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.909783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.909812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.910242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.910271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 
00:30:09.398 [2024-07-16 00:41:22.910635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.910662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.911078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.911106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.911515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.911544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.911926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.911955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.912343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.912372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.912781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.912809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.913218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.913255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.913654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.913682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.914142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.914171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.914454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.914484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 
00:30:09.398 [2024-07-16 00:41:22.914896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.914925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.915310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.915339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.915726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.916150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.916178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.916629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.916658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.917049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.917077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.917542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.917571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.917983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.918011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.918409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.918439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-16 00:41:22.918884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-16 00:41:22.918913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-16 00:41:22.919312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.919341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.919743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.919776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.920070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.920097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.920509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.920538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.920927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.920956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.921383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.921411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.921799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.921827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.922288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.922317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.922602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.922629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.922994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.923022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-16 00:41:22.923339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.923367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.923809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.923836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.924241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.924269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.924575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.924605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.925029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.925057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.925444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.925473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.925908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.925935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.926304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.926333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.926814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.926843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.927144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.927173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-16 00:41:22.927512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.927540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.927949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.927977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.928428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.928458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.928870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.928898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.929174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.929203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.929608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.929637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-16 00:41:22.930039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-16 00:41:22.930067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.930474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.930503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.930931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.930960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.931259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.931289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.400 [2024-07-16 00:41:22.931625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.931653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.932046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.932073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.932487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.932516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.932967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.932995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.933395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.933425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.933863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.933891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.934320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.934349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.934766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.934794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.935203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.935242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.935625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.935654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.400 [2024-07-16 00:41:22.936053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.936081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.936479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.936514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.936887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.936915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.937342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.937370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.937785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.937813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.938219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.938260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-16 00:41:22.938655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-16 00:41:22.938683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.938961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.938990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.939404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.939434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.939845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.939872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-16 00:41:22.940279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.940308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.940701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.940730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.941024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.941052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.941450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.941479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.941901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.941928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.942314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.942343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.942796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.942824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.943247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.943277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.943659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.943688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.944076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.944104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-16 00:41:22.944505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.944534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.944938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.944966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.945363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.945393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.945814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.945843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.946198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.946226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.946649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.946677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.947072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.947496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.947525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.947937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.947965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.948378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.948407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-16 00:41:22.948808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.948836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.949256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.949285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.949674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.949703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.950093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.950120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-16 00:41:22.950513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-16 00:41:22.950542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.950845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.950875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.951268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.951298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.951744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.951772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.952173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.952201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.952604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.952633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 
00:30:09.402 [2024-07-16 00:41:22.953046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.953074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.953491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.953525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.953801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.953829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.954249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.954278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.954647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.954675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.955081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.955109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.955509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.955539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.955947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.955975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.956312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.956342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.956758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.956786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 
00:30:09.402 [2024-07-16 00:41:22.957056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.957086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.957497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.957526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.957925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.957953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.958360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.958655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.958684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.959101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.959129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.959401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.959431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.959816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.959844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.960254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.960283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.960775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.960803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 
00:30:09.402 [2024-07-16 00:41:22.961193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.961220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.961639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-16 00:41:22.961667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-16 00:41:22.962061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.962088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.962506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.962536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.962839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.962868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.963271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.963301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.963698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.963725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.964146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.964175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.964531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.964562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.964976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.965004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 
00:30:09.403 [2024-07-16 00:41:22.965331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.965367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.965763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.965791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.966177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.966205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.966511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.966540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.966947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.966974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.967362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.967391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.967804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.967833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.968249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.968279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.968760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.968789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.969181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.969208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 
00:30:09.403 [2024-07-16 00:41:22.969607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.969637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.970049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.970082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.970503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.970532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.970949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.970976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.971267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.971300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.971688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.971716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.972160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.972507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.972535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.972902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.972932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 00:30:09.403 [2024-07-16 00:41:22.973329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.403 [2024-07-16 00:41:22.973359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.403 qpair failed and we were unable to recover it. 
00:30:09.404 [2024-07-16 00:41:22.973637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.973664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.973966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.973994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.974419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.974448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.974799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.974827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.975201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.975238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.975710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.975740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.976123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.976150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.976536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.976565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.977013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.977040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.977389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.977417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 
00:30:09.404 [2024-07-16 00:41:22.977815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.977843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.978225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.978279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.978636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.978664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.978931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.978961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.979380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.979409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.979796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.979824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.980220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.980257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.980701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.980729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.981154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.981182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.981579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.981608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 
00:30:09.404 [2024-07-16 00:41:22.981904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.981931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.982215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.982263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.982645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.982672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.983090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.983118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.983560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.983589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.983973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.984001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.984389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.984417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.404 qpair failed and we were unable to recover it. 00:30:09.404 [2024-07-16 00:41:22.984720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.404 [2024-07-16 00:41:22.984747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.985151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.985179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.985587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.985616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 
00:30:09.405 [2024-07-16 00:41:22.985961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.985988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.986397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.986431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.986836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.986865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.987284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.987314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.987765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.987792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.988175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.988202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.988610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.988639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.989064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.989091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.989493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.989522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.989943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 
00:30:09.405 [2024-07-16 00:41:22.990380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.990410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.990774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.990803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.991241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.991271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.991694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.991722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.992082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.992110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.992530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.992559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.993029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.993057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.993468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.993497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.993784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.993816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.994223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.994261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 
00:30:09.405 [2024-07-16 00:41:22.994622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.994649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.995027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.995055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.995393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.995422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.995827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.995854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.996251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.996280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.996692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.996720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.997004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.997035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.997455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.997485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.997827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.997856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.998226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.998264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 
00:30:09.405 [2024-07-16 00:41:22.998686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.998715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.405 qpair failed and we were unable to recover it. 00:30:09.405 [2024-07-16 00:41:22.999102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.405 [2024-07-16 00:41:22.999130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:22.999558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:22.999587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:22.999999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.000029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.000431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.000460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.000852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.000879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.001309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.001338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.001713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.001742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.002137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.002164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 00:30:09.406 [2024-07-16 00:41:23.002571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.406 [2024-07-16 00:41:23.002600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.406 qpair failed and we were unable to recover it. 
00:30:09.406 [2024-07-16 00:41:23.002949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.406 [2024-07-16 00:41:23.002977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 
00:30:09.406 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back with no other output from 00:41:23.002949 through 00:41:23.089459 ...]
00:30:09.683 [2024-07-16 00:41:23.089429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.683 [2024-07-16 00:41:23.089459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 
00:30:09.683 qpair failed and we were unable to recover it. 
00:30:09.683 [2024-07-16 00:41:23.089868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.089896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.090296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.090325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.090710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.090738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.091152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.091180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.091563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.091592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.092015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.092043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.092450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.092480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.092912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.092940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.093355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.093384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 00:30:09.683 [2024-07-16 00:41:23.093797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.683 [2024-07-16 00:41:23.093824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.683 qpair failed and we were unable to recover it. 
00:30:09.683 [2024-07-16 00:41:23.094222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.094262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.094691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.094718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.095133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.095161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.095525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.095555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.095858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.095889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.096291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.096321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.096616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.096646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.097055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.097083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.097482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.097519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.097981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.098008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 
00:30:09.684 [2024-07-16 00:41:23.098379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.098409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.098883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.098911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.099303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.099332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.099741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.099769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.100266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.100297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.100711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.100738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.101144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.101172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.101586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.101615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.101989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.102016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.102397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.102426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 
00:30:09.684 [2024-07-16 00:41:23.102859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.102887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.103295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.103324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.103607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.103635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.104056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.104084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.104569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.104597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.104977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.105005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.105409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.105438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.105903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.105931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.106350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.106379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.106773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.106801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 
00:30:09.684 [2024-07-16 00:41:23.107187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.107215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.107492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.107521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.107944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.107972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.108357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.108387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.108662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.108692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.109111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.109139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.109569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.109599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.109892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.109928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.684 [2024-07-16 00:41:23.110339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.684 [2024-07-16 00:41:23.110369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.684 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.110687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.110715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 
00:30:09.685 [2024-07-16 00:41:23.111122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.111150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.111539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.111568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.111915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.111944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.112373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.112401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.112802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.112830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.113219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.113258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.113723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.113750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.114145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.114174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.114574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.114608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.114995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.115024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 
00:30:09.685 [2024-07-16 00:41:23.115318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.115351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.115743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.115771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.116192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.116221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.116617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.116646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.116987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.117015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.117420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.117450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.117868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.117896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.118176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.118207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.118567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.118597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.119009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.119037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 
00:30:09.685 [2024-07-16 00:41:23.119458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.119486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.119786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.119815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.120111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.120139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.120588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.120617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.121010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.121038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.121458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.121487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.121889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.121918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.122373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.122402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.122809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.122837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.123263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.123293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 
00:30:09.685 [2024-07-16 00:41:23.123750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.123778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.124151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.124178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.124597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.124626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.125001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.125029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.125324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.125356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.125782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.125810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.685 qpair failed and we were unable to recover it. 00:30:09.685 [2024-07-16 00:41:23.126226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.685 [2024-07-16 00:41:23.126265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.126658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.126686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.127087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.127115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.127537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.127566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 
00:30:09.686 [2024-07-16 00:41:23.128035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.128063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.128456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.128484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.128842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.128871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.129263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.129292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.129676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.129705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.130103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.130131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.130531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.130561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.130991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.131019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.131415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.131455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.131871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.131899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 
00:30:09.686 [2024-07-16 00:41:23.132198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.132227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.132650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.132679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.133077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.133106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.133516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.133545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.133959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.133988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.134285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.134313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.134788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.134816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.135117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.135145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.135557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.135586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.136087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.136115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 
00:30:09.686 [2024-07-16 00:41:23.136528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.136557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.136975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.137003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.137411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.137441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.137826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.137854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.138121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.138148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.138592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.138621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.139041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.139070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.139443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.139473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.139898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.139927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.140341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.140371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 
00:30:09.686 [2024-07-16 00:41:23.140763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.140791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.141264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.141294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.141689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.142022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.142048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.142482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.142511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.142867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-16 00:41:23.142895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-16 00:41:23.143286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.143315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.143734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.143762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.144175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.144203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.144599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.144628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-16 00:41:23.145105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.145132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.145564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.145593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.146000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.146028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.146425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.146455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.146929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.146957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.147361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.147390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.147829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.147858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.148272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.148300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.148729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.148762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.149163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.149191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-16 00:41:23.149606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.149636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.150065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.150093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.150486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.150515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.150924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.150953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.151349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.151379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.151810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.151838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.152255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.152283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.152703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.152731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.153191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.153219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.153528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.153559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-16 00:41:23.153975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.154003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.154415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.154444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.154850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.154878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.155303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.155331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.155758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.155786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.156061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.156090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.156504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.156533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.156925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-16 00:41:23.156954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-16 00:41:23.157309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.157339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.157646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.157674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-16 00:41:23.158083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.158112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.158505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.158536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.158809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.158838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.159263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.159293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.159703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.159731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.160156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.160185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.160595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.160624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.161018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.161046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.161448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.161477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.161891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.161919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-16 00:41:23.162342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.162372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.162779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.162808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.163082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.163113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.163550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.163579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.163872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.163900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.164269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.164298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.164689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.164717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.165113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.165141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.165523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.165558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.165948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.165976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-16 00:41:23.166387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.166417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.166826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.166855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.167275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.167304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.167721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.167749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.168158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.168186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.168604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.168633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.169027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.169056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.169438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.169467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.169863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.169890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.170289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.170318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-16 00:41:23.170708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.170736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.171160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.171188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.171586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.171616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.172009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.172037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.172429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.172458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.172859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.172888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.173288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.173317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.173709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.173737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.174136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.174164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-16 00:41:23.174562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-16 00:41:23.174592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-16 00:41:23.175007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.175035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.175438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.175467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.175861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.175889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.176204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.176241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.176516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.176545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.176980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.177009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.177438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.177467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.177914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.177943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.178342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.178372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.178795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.178823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-16 00:41:23.179250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.179280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.179670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.179698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.180106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.180134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.180562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.180592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.180932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.180960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.181354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.181383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.181792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.181820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.182214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.182270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.182639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.182673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.183059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.183088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-16 00:41:23.183513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.183543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.183894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.183921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.184334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.184378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.184800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.184828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.185250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.185279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.185638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.185666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.186084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.186111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.186502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.186531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.186949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.186977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.187361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.187391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-16 00:41:23.187704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.187732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.188022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.188049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.188436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.188465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.188856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.188884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.189285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.189314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.189759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.189787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.190197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.190224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.190627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.190656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.191047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.191074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-16 00:41:23.191386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-16 00:41:23.191416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.690 [2024-07-16 00:41:23.191765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.191794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.192214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.192252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.192641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.192670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.193084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.193113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.193509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.193538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.193941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.193970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.194365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.194395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.194782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.194810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.195225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.195263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.195683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.195711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 
00:30:09.690 [2024-07-16 00:41:23.196092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.196120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.196548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.196577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.196985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.197014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.197438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.197467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.197857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.197885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.198298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.198327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.198721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.198749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.198950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.198978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.199344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.199379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.199776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.199804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 
00:30:09.690 [2024-07-16 00:41:23.200179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.200208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.200600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.200629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.201044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.201071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.201456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.201485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.201767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.201797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.202207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.202245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.202530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.202560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.202963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.202991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.203403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.203432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.203837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.203865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 
00:30:09.690 [2024-07-16 00:41:23.204265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.204295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.204710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.204738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.205149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.205177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.205367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.205399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.205813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.205840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.206155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.206183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.206603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.206632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.690 qpair failed and we were unable to recover it. 00:30:09.690 [2024-07-16 00:41:23.207030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.690 [2024-07-16 00:41:23.207058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.207453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.207483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.207779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.207809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 
00:30:09.691 [2024-07-16 00:41:23.208239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.208268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.208709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.208737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.209129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.209157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.209458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.209487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.209891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.209920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.210313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.210343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.210630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.210661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.210952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.210983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.211386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.211415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.211696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.211727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 
00:30:09.691 [2024-07-16 00:41:23.212122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.212151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.212537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.212566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.212982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.213010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.213399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.213429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.213703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.213734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.214143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.214171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.214466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.214498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.214901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.214930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.215326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.215366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.215755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.215783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 
00:30:09.691 [2024-07-16 00:41:23.216173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.216201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.216602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.216631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.217066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.217095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.217518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.217547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.217960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.217988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.691 [2024-07-16 00:41:23.218372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.691 [2024-07-16 00:41:23.218401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.691 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.218813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.218842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.219137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.219164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.219473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.219502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.219813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.219841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 
00:30:09.692 [2024-07-16 00:41:23.220203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.220241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.220545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.220573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.220983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.221012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.221441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.221470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.221867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.221895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.222351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.222380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.222682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.222711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.223126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.223155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.223571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.223601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.224010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.224037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 
00:30:09.692 [2024-07-16 00:41:23.224386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.224415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.224807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.224834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.225239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.225269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.225700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.225728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.226138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.226165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.226573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.226604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.227029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.227057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.227400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.227428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.227725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.227753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.228188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.228215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 
00:30:09.692 [2024-07-16 00:41:23.228620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.228648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.229060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.229089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.229383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.229412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.229826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.229854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.230249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.230278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.230771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.230799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.231082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.231109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.231491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.231520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.231916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.231943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.232397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.232426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 
00:30:09.692 [2024-07-16 00:41:23.232723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.232753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.233134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.233162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.233595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.233624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.692 [2024-07-16 00:41:23.233912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.692 [2024-07-16 00:41:23.233941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.692 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.234357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.234387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.234811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.234839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.235306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.235335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.235738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.235766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.236123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.236151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.236558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.236587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 
00:30:09.693 [2024-07-16 00:41:23.236960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.236987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.237392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.237421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.237834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.237862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.238261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.238289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.238714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.238742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.239161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.239189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.239588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.239619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.240040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.240068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.240486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.240515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.240858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.240886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 
00:30:09.693 [2024-07-16 00:41:23.241290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.241318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.241757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.241784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.242096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.242123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.242506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.242536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.242951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.242979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.243374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.243408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.243696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.243727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.244011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.244449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.244479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.244877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.244905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 
00:30:09.693 [2024-07-16 00:41:23.245297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.245326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.245738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.245766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.246177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.246205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.246604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.246633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.246916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.246947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.247259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.247288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.247709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.247737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.248133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.248161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.248539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.248569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.248864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.248893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 
00:30:09.693 [2024-07-16 00:41:23.249135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.249163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.249550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.249579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.249977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.250004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.250298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.250330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.250759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.693 [2024-07-16 00:41:23.250787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.693 qpair failed and we were unable to recover it. 00:30:09.693 [2024-07-16 00:41:23.251210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.251247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.251633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.251661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.252069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.252097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.252515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.252543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.252936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.252965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 
00:30:09.694 [2024-07-16 00:41:23.253394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.253423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.253835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.253863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.254271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.254300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.254645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.254674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.255143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.255170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.255553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.255581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.255962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.255991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.256442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.256471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.256895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.256922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.257337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.257365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 
00:30:09.694 [2024-07-16 00:41:23.257790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.257819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.258283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.258311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.258650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.258678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.259057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.259085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.259425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.259454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.259829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.259863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.260268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.260297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.260714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.260743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.261156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.261184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.261452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.261483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 
00:30:09.694 [2024-07-16 00:41:23.261892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.261920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.262304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.262333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.262772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.262800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.263273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.263303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.263748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.263776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.264188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.264217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.264613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.264642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.265045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.265073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.265507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.265536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.265832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.265860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 
00:30:09.694 [2024-07-16 00:41:23.266265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.266294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.266689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.266717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.267014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.267042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.267339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.267369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.267789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.267816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.694 qpair failed and we were unable to recover it. 00:30:09.694 [2024-07-16 00:41:23.268213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.694 [2024-07-16 00:41:23.268252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.268645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.268674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.269057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.269085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.269482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.269511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.269860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.269887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-16 00:41:23.270302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.270331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.270752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.271123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.271151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.271431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.271460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.271899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.271927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.272308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.272337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.272728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.272755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.273155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.273183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.273584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.273614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.274036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.274064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-16 00:41:23.274460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.274490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.274910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.274939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.275363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.275392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.275808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.275836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.276250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.276278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.276686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.276719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.277102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.277129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.277546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.277575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.277870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.277898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.278259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.278289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-16 00:41:23.278716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.278744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.279155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.279183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.279492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.279524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.279804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.279833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.280263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.280293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.280722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.280750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.281059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.281086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.281489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.281518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.281915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.281943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.282358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.282387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-16 00:41:23.282787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.282816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.283212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.283267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.283723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.283751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.284173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.284201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.284625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.284654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.285063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.285091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-16 00:41:23.285240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-16 00:41:23.285268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.285694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.285722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.286021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.286049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.286448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.286476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 
00:30:09.696 [2024-07-16 00:41:23.286898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.286926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.287316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.287344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.287683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.287712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.288108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.288135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Read completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 Write completed with error (sct=0, sc=8)
00:30:09.696 starting I/O failed
00:30:09.696 [2024-07-16 00:41:23.288500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:09.696 [2024-07-16 00:41:23.288997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.289016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.289558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.289619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.290032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.290046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.290449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.290510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.290923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.290936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.291475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.291537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.291922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.291935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.292294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.696 [2024-07-16 00:41:23.292305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.696 qpair failed and we were unable to recover it.
00:30:09.696 [2024-07-16 00:41:23.292690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.292701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.293095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.293106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.293464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.293476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-16 00:41:23.293875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-16 00:41:23.293886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.294338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.294348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.294707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.294718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.295109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.295120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.295489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.295501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.295854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.295864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.296219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.296233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 
00:30:09.697 [2024-07-16 00:41:23.296504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.296514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.296865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.296875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.297240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.297252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.297546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.297556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.297849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.297859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.298135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.298145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-16 00:41:23.298567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-16 00:41:23.298579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.298956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.298971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.299382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.299393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.299768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.299779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 
00:30:09.970 [2024-07-16 00:41:23.300033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.300043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.300398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.300408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.300765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.300775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.301154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.301164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.301639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.301652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.302009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.302020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.302436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.302446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.302821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.302831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.303186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.303196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.303492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.303503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 
00:30:09.970 [2024-07-16 00:41:23.303894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.303905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.304162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.304172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.304557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.304567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.304920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.304930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.305284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.305294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.305590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.305601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.305980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.305989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.306350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.306363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.306721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.306730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.306975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.306986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 
00:30:09.970 [2024-07-16 00:41:23.307376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.307386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.307760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.307770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.308173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.308183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.308611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.308622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.308966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.308976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.309321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.309331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.309726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.309736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.310008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.310019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.310281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.310291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.970 qpair failed and we were unable to recover it. 00:30:09.970 [2024-07-16 00:41:23.310666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.970 [2024-07-16 00:41:23.310676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.311033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.311043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.311398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.311408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.311820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.311830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.312223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.312243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.312487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.312497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.312879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.312889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.313242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.313252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.313622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.313632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.313891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.313902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.314342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.314352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.314696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.314706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.315139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.315148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.315498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.315508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.315858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.315868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.316220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.316235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.316590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.316600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.316955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.316965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.317325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.317335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.317691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.317702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.318055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.318064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.318420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.318431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.318803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.318814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.319193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.319202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.319475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.319485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.319774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.319784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.320159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.320170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.320605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.320615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.321000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.321010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.321371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.321384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.321740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.321750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.322016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.322026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.322379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.322389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.322632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.322644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.323028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.323038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.323388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.323399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.323755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.323765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.324158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.324169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.324526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.324536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.324886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.324896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.325244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.325255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.325658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.325667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.326033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.326044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.326533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.326595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.327071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.327085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.327558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.327620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.328034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.328046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.328452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.328512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.328904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.328917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.329205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.329216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.329498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.329508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 
00:30:09.971 [2024-07-16 00:41:23.329895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.329905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.971 [2024-07-16 00:41:23.330255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.971 [2024-07-16 00:41:23.330266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.971 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.330712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.330722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.330979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.330991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.331254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.331264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.331646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.331667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.332015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.332026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.332317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.332328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.332706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.332716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.333124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.333134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.972 [2024-07-16 00:41:23.333381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.333391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.333770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.333780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.334166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.334178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.334553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.334563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.334952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.334962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.335309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.335319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.335669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.335680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.336047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.336058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.336430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.336440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.336804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.336815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.972 [2024-07-16 00:41:23.337158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.337168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.337533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.337544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.337893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.337903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.338152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.338163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.338510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.338520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.338871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.338881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.339238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.339248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.339610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.339620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.339973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.339983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.340331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.340342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.972 [2024-07-16 00:41:23.340603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.340614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.340985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.340996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.341262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.341272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.341665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.341674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.342023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.342032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.342397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.342407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.342784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.342793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.343060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.343070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.343377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.343387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.343716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.343726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.972 [2024-07-16 00:41:23.344096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.344106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.344409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.344420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.344776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.344787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.345225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.345241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.345591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.345600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.345944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.345954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.346306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.346318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.346666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.346677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.347055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.347065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.347439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.347449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.972 [2024-07-16 00:41:23.347799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.347809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.348250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.348261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.348618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.348628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.348888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.348898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.349287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.349298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.349660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.349670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.350019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.350029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.350469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.350479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.350832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.350842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 00:30:09.972 [2024-07-16 00:41:23.351055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.972 [2024-07-16 00:41:23.351066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.972 qpair failed and we were unable to recover it. 
00:30:09.976 [2024-07-16 00:41:23.421629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.976 [2024-07-16 00:41:23.421640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.976 qpair failed and we were unable to recover it.
00:30:09.976 [2024-07-16 00:41:23.422004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.422013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.422356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.422366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.422710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.422719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.422954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.422964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.423330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.423340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.423682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.423692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.424104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.424114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.424505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.424515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.424859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.424868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.425209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.425218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 
00:30:09.976 [2024-07-16 00:41:23.425563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.425573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.425912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.425922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.426122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.426134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.426469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.426480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.426830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.426839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.427178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.427188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.427460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.427470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.427832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.427842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.428036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.428047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.428420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.428430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 
00:30:09.976 [2024-07-16 00:41:23.428796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.428997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.429009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.429360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.429370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.429738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.429748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.430134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.430144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.430340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.430352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.430706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.430716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.431052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.431062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.431431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.431441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 00:30:09.976 [2024-07-16 00:41:23.431781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.431791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.976 qpair failed and we were unable to recover it. 
00:30:09.976 [2024-07-16 00:41:23.431988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.976 [2024-07-16 00:41:23.431998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.432372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.432385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.432593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.432604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.432983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.432993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.433342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.433352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.433694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.433703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.434053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.434062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.434405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.434416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.434755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.434764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.435129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.435138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.435507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.435517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.435861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.435872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.436287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.436297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.436688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.436697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.437032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.437041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.437379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.437389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.437776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.437785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.438124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.438134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.438494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.438505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.438917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.438927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.439269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.439279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.439607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.439617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.439977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.439987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.440389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.440400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.440783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.440793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.441147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.441156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.441612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.441622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.441964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.441973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.442314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.442327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.442698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.442707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.443063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.443073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.443329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.443340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.443692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.443701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.444031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.444041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.444377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.444387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.444723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.444732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.445073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.445082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.445422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.445431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.445774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.445784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.446145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.446155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.446510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.446520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.446853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.446863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.447242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.447253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.447641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.447650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.447993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.448002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.448338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.448348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.448682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.448692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.449037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.449046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.449404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.449414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.449770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.449780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.450175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.450186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.450547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.450557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.450902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.450911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.451176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.451185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.451442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.451452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.451812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.451821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.452162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.452172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.452539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.452549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.452891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.452901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.453287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.453298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 
00:30:09.977 [2024-07-16 00:41:23.453695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.453705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.454041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.977 [2024-07-16 00:41:23.454050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.977 qpair failed and we were unable to recover it. 00:30:09.977 [2024-07-16 00:41:23.454390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.454400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.454669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.454679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.455019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.455029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.455396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.455406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.455759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.455768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.456110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.456119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.456541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.456552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.456908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.456921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.457286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.457296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.457668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.457677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.458020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.458030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.458369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.458379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.458717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.458726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.459064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.459073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.459422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.459433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.459806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.459816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.460020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.460032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.460396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.460407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.460743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.460753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.461116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.461125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.461497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.461507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.461848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.461858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.462198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.462208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.462559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.462568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.462904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.462914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.463256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.463266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.463533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.463542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.463921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.463930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.464168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.464178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.464546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.464555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.464897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.464907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.465248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.465257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.465501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.465510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.465843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.465852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.466191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.466204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.466562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.466573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.466935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.466945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.467305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.467315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.467663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.467672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.468011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.468021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.468359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.468369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.468731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.468741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.469017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.469027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.469391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.469401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.469804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.469814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.470178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.470189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.470614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.470624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.470975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.470984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.471340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.471350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.471766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.471775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.472194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.472203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.472456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.472465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.472810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.472819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.473159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.473169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.473514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.473523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.473807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.473817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.474074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.474084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.474447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.474457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 
00:30:09.978 [2024-07-16 00:41:23.474868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.474878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.475209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.475218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.475574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.475585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.978 [2024-07-16 00:41:23.475942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.978 [2024-07-16 00:41:23.475952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.978 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.476338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.476349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.476690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.476699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.476992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.477002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.477358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.477368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.477753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.477763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.478095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.478105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-16 00:41:23.478461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.478472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.478830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.478840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.479190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.479200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.479618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.479628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.479962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.479972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.480311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.480321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.480540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.480550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.480911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.480923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.481184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.481194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.481540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.481550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-16 00:41:23.481931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.481941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.482372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.482382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.482741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.482750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.483086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.483096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.483435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.483445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.483824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.483833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.484210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.484220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.484582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.484592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.484969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.484980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.485341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.485350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-16 00:41:23.485688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.485697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.486031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.486041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.486385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.486395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.486761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.486771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.487027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.487037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.487395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.487405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.487744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.487754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.488096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.488105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.488444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.488454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.488782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.488791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-16 00:41:23.489138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.489148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.489489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.489499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.489793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.489802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.490184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.490194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.490573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.490586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.490842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.490852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.491188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.491198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.491426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.491436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.491835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.491845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.492182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.492191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-16 00:41:23.492530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.492540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.492876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.492886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.493266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.493277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.493634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.493644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-16 00:41:23.493990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-16 00:41:23.493999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.494336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.494346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.494692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.494702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.495088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.495097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.495463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.495473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.495829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.495838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-16 00:41:23.496180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.496189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.496522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.496532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.496891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.496901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.497165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.497175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.497524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.497535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.497871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.497880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.498224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.498250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.498616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.498626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.498877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.498886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.499251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.499261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-16 00:41:23.499701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.499711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.500048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.500057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.500396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.500407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.500766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.500776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.501116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.501125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.501496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.501507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.501888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.501899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1292289 Killed "${NVMF_APP[@]}" "$@" 00:30:09.980 [2024-07-16 00:41:23.502243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.502254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.502686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.502696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
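The shell message embedded above ("target_disconnect.sh: line 36: 1292289 Killed "${NVMF_APP[@]}" "$@"") shows that the test has just killed the nvmf_tgt application on purpose, so every subsequent connect() toward 10.0.0.2:4420 is refused; errno 111 is ECONNREFUSED on Linux, which is exactly what posix_sock_create and nvme_tcp_qpair_connect_sock keep reporting. As a minimal, illustrative bash sketch (not part of the test suite; it only assumes bash's /dev/tcp support and the address/port printed in the log), the port state could be probed like this:

  # Probe 10.0.0.2:4420 once; the redirection fails with "Connection refused"
  # while the target is down, mirroring the errno = 111 entries above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "port 4420 is accepting connections again"
  else
      echo "connect to 10.0.0.2:4420 refused (target still down)"
  fi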
00:30:09.980 [2024-07-16 00:41:23.502992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:09.980 [2024-07-16 00:41:23.503002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.503355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.503365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:09.980 [2024-07-16 00:41:23.503707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.503717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.980 [2024-07-16 00:41:23.503973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.503983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.980 [2024-07-16 00:41:23.504323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.504333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.504673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.504682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.505020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.505030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.505368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.505379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-16 00:41:23.505746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.505757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.506164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.506175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.506526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.506536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.506988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.506998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.507379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.507390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.507814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.507824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.508161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.508171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.508517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.508528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.508745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.508758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.509117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.509130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-16 00:41:23.509490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.509500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.509833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.509845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.510204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.510213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.510641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.510651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.510887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.510898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.511217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.511228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.511570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.511580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-16 00:41:23.511772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.511783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1293294 00:30:09.980 [2024-07-16 00:41:23.512102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.512116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1293294 00:30:09.980 [2024-07-16 00:41:23.512476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.512489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1293294 ']' 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:09.980 [2024-07-16 00:41:23.512843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.512855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.980 [2024-07-16 00:41:23.513240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.980 [2024-07-16 00:41:23.513256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.980 [2024-07-16 00:41:23.513626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.513639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.980 00:41:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.980 [2024-07-16 00:41:23.514000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-16 00:41:23.514013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.514399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.514411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
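At this point the xtrace shows the tc2 case bringing up a fresh target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xF0, records its pid (1293294), and waitforlisten blocks until the app is reachable on the UNIX domain socket /var/tmp/spdk.sock. The following is only a rough bash approximation of that wait loop for readers of the log; the real waitforlisten helper lives in the SPDK common test scripts and differs in detail:

  pid=1293294                      # pid echoed by nvmfappstart above
  sock=/var/tmp/spdk.sock          # RPC socket named in the "Waiting for process..." line
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
      [ -S "$sock" ] && { echo "nvmf_tgt ($pid) is listening on $sock"; break; }
      sleep 0.1
  done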
00:30:09.981 [2024-07-16 00:41:23.514765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.514777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.515138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.515150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.515527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.515538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.515897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.515907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.516299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.516311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.516682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.516693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.517074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.517086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.517450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.517463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.517823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.517834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.518194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.518205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-16 00:41:23.518561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.518572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.518929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.518941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.519317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.519328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.519697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.519709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.520069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.520081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.520355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.520367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.520708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.520719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.521064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.521077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.521434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.521445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.521800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.521810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-16 00:41:23.522191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.522201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.522538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.522550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.522731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.522742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.523107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.523118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.523494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.523506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.523853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.523863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.524200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.524211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.524640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.524652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.525018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.525028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-16 00:41:23.525386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-16 00:41:23.525398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-16 00:41:23.525757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.981 [2024-07-16 00:41:23.525766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:09.981 qpair failed and we were unable to recover it.
[The same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, from 00:41:23.525 through 00:41:23.565, interrupted only by the two initialization messages below.]
00:30:09.983 [2024-07-16 00:41:23.565115] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:30:09.983 [2024-07-16 00:41:23.565173] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[The same qpair connection-failure sequence then resumes and continues through 00:41:23.598; the elapsed-time prefix advances from 00:30:09.981 to 00:30:10.258 over the course of the run.]
00:30:10.258 [2024-07-16 00:41:23.599080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.599090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.599397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.599408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.599753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.599763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.600094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.600104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.600459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.600470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.600818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.600828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-07-16 00:41:23.601156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-07-16 00:41:23.601166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.601487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.601497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.601821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.601831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.602197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.602206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-07-16 00:41:23.602555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.602565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.602888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.602898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.259 [2024-07-16 00:41:23.603246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.603258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.603614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.603624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.603957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.603966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.604342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.604353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.604693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.604702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.605041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.605051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.605383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.605393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.605737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.605748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-07-16 00:41:23.606031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.606041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.606400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.606412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.606661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.606670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.607020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.607031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.607353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.607363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.607668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.607684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.607916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.607925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.608278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.608289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.608632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.608642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.608894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.608904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-07-16 00:41:23.609263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.609274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.609588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.609597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.609966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.609976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.610248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.610259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.610586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.610596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.610927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.610937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.611344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.611355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.611640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.611650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.611976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.611985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.612320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.612330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-07-16 00:41:23.612666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.612675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.613052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.613063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.613317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.613328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.613682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.613692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.614101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.614111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.614448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.614458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.614798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.614808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-07-16 00:41:23.615141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-07-16 00:41:23.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.615354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.615364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.615736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.615746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 
00:30:10.260 [2024-07-16 00:41:23.616079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.616090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.616477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.616488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.616898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.616909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.617260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.617270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.617654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.617665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.618037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.618048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.618373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.618383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.618722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.618732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.618968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.618977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.619320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.619331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 
00:30:10.260 [2024-07-16 00:41:23.619667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.619678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.620063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.620073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.620406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.620417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.620790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.620800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.621133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.621491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.621501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.621832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.621844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.622198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.622209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.622562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.622572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.622908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.622918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 
00:30:10.260 [2024-07-16 00:41:23.623242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.623253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.623595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.623605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.623942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.623952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.624295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.624305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.624645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.624655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.624901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.624911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.625155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.625165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.625364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.625376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.625741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.625751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.626087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.626097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 
00:30:10.260 [2024-07-16 00:41:23.626454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.626464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.626811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.626820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.627156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.627166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-07-16 00:41:23.627512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-07-16 00:41:23.627522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.627852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.627863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.628194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.628205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.628539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.628549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.628879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.628889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.629218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.629236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.629594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.629605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 
00:30:10.261 [2024-07-16 00:41:23.629958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.629969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.630323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.630333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.630743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.630752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.630975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.630987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.631303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.631314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.631672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.631681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.631872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.631883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.632206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.632216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.632562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.632572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.632783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.632794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 
00:30:10.261 [2024-07-16 00:41:23.633153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.633163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.633509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.633521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.633774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.633784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.634115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.634125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.634492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.634502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.634837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.634848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.635296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.635306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.635645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.635655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.635986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.635996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.636334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.636345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 
00:30:10.261 [2024-07-16 00:41:23.636714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.636723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.637100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.637110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.637397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.637407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.637652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.637661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.638031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.638042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.638460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.638470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.638846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.638856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.639191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.639201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.639579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.639590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.639951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.639962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 
00:30:10.261 [2024-07-16 00:41:23.640298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.640311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.640696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.640706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.641084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.641094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.641516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.641527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.641867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-07-16 00:41:23.641876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.261 qpair failed and we were unable to recover it. 00:30:10.261 [2024-07-16 00:41:23.642210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.642219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.642453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.642463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.642820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.642830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.643161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.643171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.643532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.643542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 
00:30:10.262 [2024-07-16 00:41:23.643876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.643885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.644208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.644218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.644470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.644481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.644822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.644833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.645157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.645167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.645453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.645464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.645805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.645816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.646201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.646212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.646534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.646545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.646875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.646886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 
00:30:10.262 [2024-07-16 00:41:23.647244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.647256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.647499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.647509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.647746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.647756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.648090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.648100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.648440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.648451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.648784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.648794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.649134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.649144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.649490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.649501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.649858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.649869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.649994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.650004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 
00:30:10.262 [2024-07-16 00:41:23.650362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.650372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.650739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.650749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.650956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.650966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.651315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.651326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.651667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.651677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.652016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.652027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.652305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.652315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.652668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.652679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.262 qpair failed and we were unable to recover it. 00:30:10.262 [2024-07-16 00:41:23.653029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.262 [2024-07-16 00:41:23.653039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.653394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.653405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 
00:30:10.263 [2024-07-16 00:41:23.653742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.653752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.653988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.654000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.654357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.654368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.654540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.654550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.655026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.655036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.655219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.655233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.655473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.655483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.655831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.655841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.656168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.656178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.656522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.656532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 
00:30:10.263 [2024-07-16 00:41:23.656867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.656877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.657208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.657218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.657551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.657561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.657586] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.263 [2024-07-16 00:41:23.657988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.657997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.658336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.658346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.658729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.658740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.659094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.659108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.659449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.659458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.659788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.659798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.660130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.660141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 
00:30:10.263 [2024-07-16 00:41:23.660503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.660513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.660771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.660781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.661164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.661173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.661509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.661519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.661853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.661862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.662118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.662128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.662338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.662348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.662673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.662684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.662969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.662982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.663421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.663432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 
00:30:10.263 [2024-07-16 00:41:23.663762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.663773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.663981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.663991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.664344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.664354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.664707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.664717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.665049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.665059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.665395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.665405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.665745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.665755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.666164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.666174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.666407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.263 [2024-07-16 00:41:23.666417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.263 qpair failed and we were unable to recover it. 00:30:10.263 [2024-07-16 00:41:23.666754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.666764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 
00:30:10.264 [2024-07-16 00:41:23.667079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.667089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.667448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.667458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.667794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.667804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.668203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.668215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.668473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.668483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.668800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.668810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.669167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.669177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.669521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.669531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.669863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.669873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.670125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.670136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 
00:30:10.264 [2024-07-16 00:41:23.670488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.670499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.670844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.670854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.671182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.671191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.671530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.671541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.671995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.672006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.672340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.672352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.672700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.672710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.673091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.673101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.673438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.673448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.673649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.673660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 
00:30:10.264 [2024-07-16 00:41:23.674031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.674041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.674393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.674403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.674738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.674747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.675080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.675090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.675422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.675432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.675809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.675819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.676031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.676041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.676424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.676434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.676806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.676816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.677194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.677204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 
00:30:10.264 [2024-07-16 00:41:23.677569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.677580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.677874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.677884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.678243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.678254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.678467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.678477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.678781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.678791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.679121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.679131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.679483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.679493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.679830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.679840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.680065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.680076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.264 [2024-07-16 00:41:23.680452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.680462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 
00:30:10.264 [2024-07-16 00:41:23.680801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.264 [2024-07-16 00:41:23.680811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.264 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.681150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.681160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.681534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.681545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.681905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.681915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.682246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.682257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.682664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.682674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.683009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.683018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.683352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.683362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.683651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.683661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.683899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.683909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 
00:30:10.265 [2024-07-16 00:41:23.684281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.684291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.684636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.684645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.684981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.684991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.685361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.685372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.685717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.685728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.686089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.686099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.686475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.686488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.686822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.686831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.687243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.687253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.687650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.687660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 
00:30:10.265 [2024-07-16 00:41:23.687993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.688002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.688257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.688268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.688590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.688600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.688840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.688849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.689154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.689164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.689501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.689511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.689842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.689852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.690191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.690201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.690423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.690433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.690822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.690832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 
00:30:10.265 [2024-07-16 00:41:23.691163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.691174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.691570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.691581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.691973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.691983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.692328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.692338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.692680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.692691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.693028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.693039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.693294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.693305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.693667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.693677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.694025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.694035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.694407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.694417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 
00:30:10.265 [2024-07-16 00:41:23.694793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.694804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.695148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.695158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.265 qpair failed and we were unable to recover it. 00:30:10.265 [2024-07-16 00:41:23.695430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.265 [2024-07-16 00:41:23.695440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.695776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.695790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.696121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.696131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.696497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.696507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.696914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.696924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.697252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.697262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.697484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.697494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.697844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.697856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 
00:30:10.266 [2024-07-16 00:41:23.698116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.698126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.698370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.698380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.698722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.698732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.699068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.699078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.699328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.699346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.699568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.699578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.699908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.699918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.700260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.700270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.700633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.700643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-07-16 00:41:23.700895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-07-16 00:41:23.700904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 
00:30:10.266 [2024-07-16 00:41:23.701245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.266 [2024-07-16 00:41:23.701255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.266 qpair failed and we were unable to recover it.
[... the same retry pattern recurs continuously from 00:41:23.701 through 00:41:23.724: posix.c:1023:posix_sock_create connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:10.268 [2024-07-16 00:41:23.725003] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:10.268 [2024-07-16 00:41:23.725035] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:10.268 [2024-07-16 00:41:23.725042] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:10.268 [2024-07-16 00:41:23.725049] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:10.268 [2024-07-16 00:41:23.725055] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:10.268 [2024-07-16 00:41:23.725197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:10.268 [2024-07-16 00:41:23.725333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:10.268 [2024-07-16 00:41:23.725465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:10.268 [2024-07-16 00:41:23.725495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[... interleaved with these notices, the connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error retries against tqpair=0x5d4b50 (addr=10.0.0.2, port=4420) continue from 00:41:23.725 through 00:41:23.727, each attempt ending with "qpair failed and we were unable to recover it." ...]
[... the identical retry loop repeats from 00:41:23.727 through 00:41:23.769: posix.c:1023:posix_sock_create connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:10.271 [2024-07-16 00:41:23.769432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.271 [2024-07-16 00:41:23.769443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.271 qpair failed and we were unable to recover it.
00:30:10.271 [2024-07-16 00:41:23.769806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.769815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.770024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.770033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.770348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.770358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.770574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.770583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.770949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.770959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.771317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.771327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.771534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.771543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.771924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-16 00:41:23.771934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-16 00:41:23.772334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.772344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.772684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.772694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-07-16 00:41:23.773027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.773036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.773370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.773379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.773724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.773734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.774063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.774073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.774217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.774227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.774409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.774419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.774663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.774673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.775028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.775037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.775386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.775396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.775758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.775768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-07-16 00:41:23.776032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.776042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.776391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.776401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.776754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.776763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.777151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.777160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.777512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.777522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.777884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.777894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.778228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.778243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.778582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.778592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.778969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.778978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.779226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.779242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-07-16 00:41:23.779450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.779459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.779837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.779847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.780182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.780191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.780535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.780545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.780760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.780769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.781093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.781103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.781323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.781333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.781688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.781699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.782054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.782064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.782421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.782431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-07-16 00:41:23.782768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.782777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.783180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.783189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.783537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.783547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.783955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.783965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.784177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.784187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.784409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.784421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.784841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.784851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.785184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.785194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.785590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-07-16 00:41:23.785600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-07-16 00:41:23.785955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.785964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-07-16 00:41:23.786301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.786310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.786661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.786670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.787009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.787018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.787384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.787393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.787749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.787759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.788091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.788101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.788461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.788471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.788825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.788835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.789191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.789200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.789557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.789567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-07-16 00:41:23.789909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.789919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.790134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.790143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.790319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.790328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.790606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.790615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.790962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.790971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.791351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.791361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.791701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.791711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.792044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.792055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.792412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.792422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.792730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.792739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-07-16 00:41:23.793072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.793082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.793304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.793314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.793661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.793670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.794027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.794037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.794186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.794196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.794517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.794528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.794920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.794929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.795254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.795264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.795651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.795661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.796003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.796012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-07-16 00:41:23.796348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.796358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.796799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.796808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.797140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.797150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.797389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.797400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.797755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.797764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.798103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.798451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.798463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.798850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.798860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.799202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.799212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.799629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.799639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-07-16 00:41:23.800072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-07-16 00:41:23.800081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-07-16 00:41:23.800426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.800436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.800774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.800784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.801163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.801173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.801539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.801550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.801907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.801916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.802185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.802195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.802417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.802427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.802780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.802789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.803204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.803213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-07-16 00:41:23.803603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.803613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.803862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.803872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.804125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.804135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.804389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.804399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.804749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.804758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.805089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.805098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.805435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.805446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.805682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.805691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.805919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.805928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.806151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.806160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-07-16 00:41:23.806500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.806510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.806842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.806852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.807238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.807249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.807500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.807512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.807757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.807767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.807979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.807988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.808263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.808283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.808657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.808666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.808951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.808960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.809320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.809330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-07-16 00:41:23.809665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.809674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.810010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.810019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.810353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.810362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.810704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.810714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.811099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.811109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.811545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.811555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.811905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.811914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.812267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.812277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.812643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.812652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-07-16 00:41:23.812984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-07-16 00:41:23.812993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-07-16 00:41:23.813078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.274 [2024-07-16 00:41:23.813087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.274 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 00:41:23.813 and 00:41:23.878; duplicate entries omitted ...]
00:30:10.551 [2024-07-16 00:41:23.878404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.551 [2024-07-16 00:41:23.878414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.551 qpair failed and we were unable to recover it.
00:30:10.551 [2024-07-16 00:41:23.878757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.878767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.878907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.878917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.879140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.879150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.879504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.879514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.879916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.879927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.880305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.880315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.880620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.880629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.881015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.881025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.881398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.881408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.881643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.881653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 
00:30:10.551 [2024-07-16 00:41:23.881900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.881910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.882292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.882302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.882658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.882669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.882942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.882952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.883322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.883331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.883568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.883837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.883846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.551 qpair failed and we were unable to recover it. 00:30:10.551 [2024-07-16 00:41:23.884210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.551 [2024-07-16 00:41:23.884219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.884554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.884566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.884903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.884912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 
00:30:10.552 [2024-07-16 00:41:23.885254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.885265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.885618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.885628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.885959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.885970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.886348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.886358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.886710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.886720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.886981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.886991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.887346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.887356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.887731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.887741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.888071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.888080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.888412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.888423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 
00:30:10.552 [2024-07-16 00:41:23.888779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.888789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.889098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.889108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.889449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.889459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.889806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.889817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.890149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.890159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.890530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.890540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.890877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.890886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.891134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.891143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.891371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.891383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.891745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.891755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 
00:30:10.552 [2024-07-16 00:41:23.892112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.892309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.892319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.892539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.892548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.892965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.892975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.893167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.893177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.893508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.893518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.894008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.894019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.894383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.894394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.894751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.894761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 00:30:10.552 [2024-07-16 00:41:23.895094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.552 [2024-07-16 00:41:23.895104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.552 qpair failed and we were unable to recover it. 
00:30:10.552 [2024-07-16 00:41:23.895489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.895499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.895851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.895860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.896197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.896206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.896543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.896553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.896894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.896904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.897150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.897161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.897545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.897556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.897883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.897893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.898101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.898111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.898317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.898327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 
00:30:10.553 [2024-07-16 00:41:23.898731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.898741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.899077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.899087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.899285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.899295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.899647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.899656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.899849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.899858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.900191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.900200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.900531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.900541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.900916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.900926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.901292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.901302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.901593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.901603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 
00:30:10.553 [2024-07-16 00:41:23.901943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.901953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.902199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.902209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.902566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.902576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.902831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.902841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.903194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.903204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.903416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.903426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.903766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.903776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.904147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.904157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.904464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.904474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.904823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.904832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 
00:30:10.553 [2024-07-16 00:41:23.905170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.905180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.905249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.905259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.553 [2024-07-16 00:41:23.905610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.553 [2024-07-16 00:41:23.905620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.553 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.905955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.905966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.906345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.906356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.906632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.906642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.906828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.906840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.907197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.907206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.907548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.907558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.907890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.907899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 
00:30:10.554 [2024-07-16 00:41:23.908233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.908243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.908410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.908420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.908550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.908562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.908861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.908871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.909206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.909216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.909558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.909568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.909798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.909808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.910154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.910165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.910335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.910344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.910676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.910685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 
00:30:10.554 [2024-07-16 00:41:23.911017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.911028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.911365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.911375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.911651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.911661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.912019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.912028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.912387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.912397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.912743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.912753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.912936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.912946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.913154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.913163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.913492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.913502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.913880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.913890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 
00:30:10.554 [2024-07-16 00:41:23.914101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.914110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.914446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.914456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.914796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.914805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.914946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.914955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.915304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.915314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.554 qpair failed and we were unable to recover it. 00:30:10.554 [2024-07-16 00:41:23.915519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.554 [2024-07-16 00:41:23.915529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.915890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.915900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.916381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.916391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.916755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.916765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.917117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.917127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 
00:30:10.555 [2024-07-16 00:41:23.917311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.917321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.917499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.917509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.917743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.917754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.918109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.918119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.918530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.918540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.918730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.918740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.918835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.918844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.919038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.919050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.919346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.919356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.919710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.919719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 
00:30:10.555 [2024-07-16 00:41:23.919903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.919913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.920283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.920293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.920628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.920637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.920973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.920984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.921333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.921343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.921678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.921689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.922046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.922056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.922406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.922416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.922751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.922762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.923119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.923130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 
00:30:10.555 [2024-07-16 00:41:23.923515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.923524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.923840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.923850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.924214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.924223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.924579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.924589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.924921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.924932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.925277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.925287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.925621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.925630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.925774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.555 [2024-07-16 00:41:23.925783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.555 qpair failed and we were unable to recover it. 00:30:10.555 [2024-07-16 00:41:23.926309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.926400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.926904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.926940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 
00:30:10.556 [2024-07-16 00:41:23.927519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.927606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8a4000b90 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.927993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.928004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.928214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.928223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.928520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.928530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.928907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.928919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.929418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.929455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.929815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.929828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.930162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.930172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.930408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.930419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.930790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.930801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 
00:30:10.556 [2024-07-16 00:41:23.931049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.931059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.931408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.931418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.931778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.931787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.932130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.932140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.932498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.932508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.932852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.932861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.933225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.933239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.933580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.933590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.933786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.933798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.934165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.934174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 
00:30:10.556 [2024-07-16 00:41:23.934515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.934525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.934869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.934879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.556 [2024-07-16 00:41:23.935118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.556 [2024-07-16 00:41:23.935127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.556 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.935499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.935509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.935857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.935866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.936251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.936260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.936677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.936686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.936883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.936894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.937237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.937247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.937587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.937597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 
00:30:10.557 [2024-07-16 00:41:23.937954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.937964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.938257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.938267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.938467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.938476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.938885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.938895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.939209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.939219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.939645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.939655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.940041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.940050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.940373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.940383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.940606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.940615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.940948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.940957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 
00:30:10.557 [2024-07-16 00:41:23.941172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.941181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.941523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.941534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.941902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.941912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.942270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.942280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.942625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.942634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.942870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.942884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.943234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.943243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.943589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.943599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.944017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.944027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.944446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.944459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 
00:30:10.557 [2024-07-16 00:41:23.944654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.944663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.944874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.944884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.945168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.945177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.945499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.945509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.945770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.945780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.557 [2024-07-16 00:41:23.946181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.557 [2024-07-16 00:41:23.946191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.557 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.946475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.946485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.946740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.946750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.947125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.947134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.947460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.947469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 
00:30:10.558 [2024-07-16 00:41:23.947817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.947827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.948240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.948250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.948446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.948456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.948776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.948786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.948995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.949004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.949493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.949503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.949848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.949858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.950185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.950195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.950541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.950551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.950895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.950904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 
00:30:10.558 [2024-07-16 00:41:23.951293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.951303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.951567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.951577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.951757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.951768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.952122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.952132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.952525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.952536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.952972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.952982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.953188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.953197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.953610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.953620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.953865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.953875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.954235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.954245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 
00:30:10.558 [2024-07-16 00:41:23.954427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.954436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.954851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.954861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.955191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.955200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.955589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.955599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.955933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.955943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.956153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.956162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.956321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.956330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.956540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.956549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.558 qpair failed and we were unable to recover it. 00:30:10.558 [2024-07-16 00:41:23.956915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.558 [2024-07-16 00:41:23.956925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.957311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.957321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 
00:30:10.559 [2024-07-16 00:41:23.957657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.957667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.958034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.958044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.958424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.958434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.958791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.958800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.959130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.959139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.959333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.959343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.959696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.959705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.960036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.960046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.960310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.960320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.960670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.960680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 
00:30:10.559 [2024-07-16 00:41:23.960882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.960893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.961265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.961275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.961594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.961604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.961788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.961797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.961999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.962009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.962366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.962375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.962716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.962726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.963053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.963063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.963269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.963279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.963481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.963490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 
00:30:10.559 [2024-07-16 00:41:23.963826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.963836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.964215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.964224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.964560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.964569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.964747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.964760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.965050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.965059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.965397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.965407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.965589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.965598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.965904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.965914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.966245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.966255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.966598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.966607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 
00:30:10.559 [2024-07-16 00:41:23.966950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.966959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.967296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.967306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.967652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.967661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.967999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.968009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.559 [2024-07-16 00:41:23.968346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.559 [2024-07-16 00:41:23.968357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.559 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.968551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.968560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.968880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.968890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.969325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.969335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.969662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.969671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.970010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.970020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 
00:30:10.560 [2024-07-16 00:41:23.970311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.970321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.970690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.970700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.971038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.971048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.971296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.971307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.971671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.971680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.972018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.972028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.972362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.972373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.972688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.972697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.972929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.972938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.973298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.973308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 
00:30:10.560 [2024-07-16 00:41:23.973648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.973659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.973913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.973923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.974269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.974280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.974644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.974654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.974992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.975001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.975210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.975219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.975402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.975411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.975729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.975738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.976071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.976080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 00:30:10.560 [2024-07-16 00:41:23.976365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.560 [2024-07-16 00:41:23.976375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.560 qpair failed and we were unable to recover it. 
00:30:10.560 [2024-07-16 00:41:23.976718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.560 [2024-07-16 00:41:23.976727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.560 qpair failed and we were unable to recover it.
00:30:10.560 [2024-07-16 00:41:23.977135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.560 [2024-07-16 00:41:23.977145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.560 qpair failed and we were unable to recover it.
00:30:10.560 [... the same three-line retry sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 2024-07-16 00:41:23.977495 through 00:41:24.042991 ...]
00:30:10.565 [2024-07-16 00:41:24.043291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.565 [2024-07-16 00:41:24.043301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420
00:30:10.565 qpair failed and we were unable to recover it.
00:30:10.565 [2024-07-16 00:41:24.043670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.043680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.044014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.044024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.044378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.044388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.044636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.044645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.045011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.045020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.045235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.045246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.045599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.045608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.045973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.045982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.046172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.046182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.046412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.046422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 
00:30:10.565 [2024-07-16 00:41:24.046795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.046805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.047139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.047149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.047530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.047542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.047872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.047881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.048213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.048222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.048409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.048420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.048773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.048783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.049069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.049078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.049255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.049265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.049678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.049688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 
00:30:10.565 [2024-07-16 00:41:24.049911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.049921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.050278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.050288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.050780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.050790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.051136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.051146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.051218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.051228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.051511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.051523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.051943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.051952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.052164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.052173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.052503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.052514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.052866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.052875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 
00:30:10.565 [2024-07-16 00:41:24.053068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.053077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.053397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.053407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.053805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.053815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.054147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.054157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.054495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.054506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.054845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.054855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.055196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.565 [2024-07-16 00:41:24.055205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.565 qpair failed and we were unable to recover it. 00:30:10.565 [2024-07-16 00:41:24.055616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.055626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.055999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.056010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.056370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.056380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 
00:30:10.566 [2024-07-16 00:41:24.056788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.056798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.057044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.057053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.057269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.057278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.057637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.057647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.057832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.057841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.058052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.058061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.058387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.058397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.058580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.058590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.059043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.059052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.059433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.059443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 
00:30:10.566 [2024-07-16 00:41:24.059778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.059788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.059996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.060005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.060208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.060218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.060568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.060578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.060770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.060780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.061148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.061159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.061510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.061520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.061851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.061862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.062217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.062227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.062562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 
00:30:10.566 [2024-07-16 00:41:24.062913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.062923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.063256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.063266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.063608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.063619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.063853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.063863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.064207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.064218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.064554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.064564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.064897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.064908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.065245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.065255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.065668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.065678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.066014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.066025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 
00:30:10.566 [2024-07-16 00:41:24.066357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.066367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.066700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.066710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.067065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.067075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.067433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.067443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.067797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.067808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.068011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.068021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.068375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.068386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.068640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.068650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.068842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.068851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.069063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.069073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 
00:30:10.566 [2024-07-16 00:41:24.069327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.069337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.069549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.069559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.069785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.069794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.070145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.070154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.070559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.070570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.070899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.566 [2024-07-16 00:41:24.070909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.566 qpair failed and we were unable to recover it. 00:30:10.566 [2024-07-16 00:41:24.071239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.071250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.071668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.071677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.072012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.072021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.072283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.072293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 
00:30:10.567 [2024-07-16 00:41:24.072650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.072660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.072988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.072999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.073333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.073344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.073602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.073616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.073967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.073977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.074318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.074328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.074539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.074550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.074729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.074738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.074937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.074946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.075306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.075317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 
00:30:10.567 [2024-07-16 00:41:24.075573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.075583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.076022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.076031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.076376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.076386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.076752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.076762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.077097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.077107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.077461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.077471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.077811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.077821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.078154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.078164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.078508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.078518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.078871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.078881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 
00:30:10.567 [2024-07-16 00:41:24.079239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.079250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.079587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.079596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.079928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.079937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.080270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.080280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.080608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.080617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.081024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.081035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.081288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.081299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.081649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.081659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.082083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.082093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.082437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.082447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 
00:30:10.567 [2024-07-16 00:41:24.082799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.082809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.083023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.083034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.083476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.083486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.083817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.083826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.084171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.084180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.084567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.084578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.084932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.084943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.085301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.085311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.085620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.085629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.086073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.086082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 
00:30:10.567 [2024-07-16 00:41:24.086430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.086440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.086642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.086651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.087020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.087030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.087413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.567 [2024-07-16 00:41:24.087424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.567 qpair failed and we were unable to recover it. 00:30:10.567 [2024-07-16 00:41:24.087774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.087786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.087974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.087984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.088357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.088368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.088791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.088800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.088981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.088990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.089355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.089365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 
00:30:10.568 [2024-07-16 00:41:24.089577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.089588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.089815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.089825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.090186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.090196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.090603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.090613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.090819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.090829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.091015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.091025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.091348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.091359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.091701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.091710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.091898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.091907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.092106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.092115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 
00:30:10.568 [2024-07-16 00:41:24.092342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.092352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.092693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.092702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.093034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.093044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.093400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.093411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.093587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.093598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.093805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.093815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.094146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.094155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.094330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.094341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.094664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.094673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.094855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.094864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 
00:30:10.568 [2024-07-16 00:41:24.095134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.095144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.095492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.095504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.095879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.095889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.096227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.096248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.096648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.096657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.096997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.097006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.097337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.097347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.097682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.097693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.097900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.097911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.098262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.098272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 
00:30:10.568 [2024-07-16 00:41:24.098619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.098629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.098835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.098845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.099029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.099039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.099368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.099377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.099715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.568 [2024-07-16 00:41:24.099725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.568 qpair failed and we were unable to recover it. 00:30:10.568 [2024-07-16 00:41:24.099935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.099944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.100330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.100340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.100683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.100692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.101052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.101061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.101406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.101416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-16 00:41:24.101479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.101490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.101755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.101765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.102100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.102109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.102440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.102450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.102781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.102790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.103125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.103135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.103342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.103352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.103721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.103730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.104077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.104087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.104430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.104440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-16 00:41:24.104663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.104673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.105028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.105038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.105367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.105377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.105755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.105764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.106106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.106116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.106485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.106495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.106829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.106839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.107045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.107055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.107428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.107438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.107609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.107618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-16 00:41:24.108060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.108071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.108419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.108429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.108814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.108825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.109163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.109172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.109541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.109551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.109736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.109745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.110023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.110033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.110387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.110397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.110831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.110841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.111149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.111159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-16 00:41:24.111432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.111442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.111636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.111646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.112013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.112022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.112312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.112322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.112690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.112699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.113029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.113039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.113393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.113404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.113599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.113609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.113840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.113849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.569 [2024-07-16 00:41:24.114207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.114216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 
00:30:10.569 [2024-07-16 00:41:24.114542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.569 [2024-07-16 00:41:24.114552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.569 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.114734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.114743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.115104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.115113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.115444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.115453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.115638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.115649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.116023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.116034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.116421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.116432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.116773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.116783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.117115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.117124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.117486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.117497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-16 00:41:24.117832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.117842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.118173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.118182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.118380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.118390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.118754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.118764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.119105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.119114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.119328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.119338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.119550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.119559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.119861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.119870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.120214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.120223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.120476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.120486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-16 00:41:24.120814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.120823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.121161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.121171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.121511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.121522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.121730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.121740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.122125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.122135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.122488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.122497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.122683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.122692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.122910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.122920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.123198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.123208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.123619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.123628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-16 00:41:24.123828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.123837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.124203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.124212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.124423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.124434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.124751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.124760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.125106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.125116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.125532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.125542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.125747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.125757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.126023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.126033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.126373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.126383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.126710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.126720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 
00:30:10.570 [2024-07-16 00:41:24.126869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.126878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.127127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.127136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.127331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.127341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.127686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.127695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.127954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.127964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.128224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.128238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.128612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.128621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.128923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.128932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.129299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.570 [2024-07-16 00:41:24.129310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.570 qpair failed and we were unable to recover it. 00:30:10.570 [2024-07-16 00:41:24.129655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.129664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-16 00:41:24.130000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.130012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.130213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.130222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.130593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.130603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.130828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.130838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.131045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.131055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.131427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.131437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.131637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.131646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.132013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.132021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.132367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.132377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.132689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.132698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-16 00:41:24.132888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.132898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.133311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.133322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.133744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.133754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.134170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.134179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.134575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.134585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.134884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.134896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.135234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.135245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.135665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.136079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.136090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.136552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.136562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-16 00:41:24.136963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.136972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.137302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.137312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.137649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.137658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.137996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.138006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.138385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.138394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.138732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.138741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.139126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.139136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.139502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.139512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.139897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.139907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.140253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.140264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-16 00:41:24.140590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.140599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.140793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.140804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.141178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.141188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.141417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.141427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.141789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.141799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.141981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.141991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.142188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.142197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.142507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.142517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.142743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.142752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.142956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.142967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 
00:30:10.571 [2024-07-16 00:41:24.143184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.143194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.143557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.143567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.143821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.143831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.144199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.144208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.144423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.144433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.144806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.144815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.571 [2024-07-16 00:41:24.145198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.571 [2024-07-16 00:41:24.145207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.571 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.145435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.145445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.145837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.145847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.146191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.146200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-16 00:41:24.146444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.146454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.146791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.146801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.147187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.147197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.147582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.147592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.147791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.147800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.148131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.148141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.148507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.148517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.148859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.148868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.149114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.149123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.149376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.149386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-16 00:41:24.149731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.149741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.149980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.149990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.150357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.150367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.150739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.150748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.151001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.151011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.151387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.151397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.151843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.151852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.152058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.152067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.152415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.152427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.152771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.152780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-16 00:41:24.153112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.153122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.153328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.153337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.153692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.153701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.154035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.154046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.154417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.154427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.154664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.154673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.155005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.155015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.155379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.155389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.155574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.155583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.156002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.156012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 
00:30:10.572 [2024-07-16 00:41:24.156152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.156162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.156535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.156545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.156802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.156811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.157184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.157193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.157547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.157557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.157888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.157898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.158182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.158192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.158383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.158393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.572 qpair failed and we were unable to recover it. 00:30:10.572 [2024-07-16 00:41:24.158620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.572 [2024-07-16 00:41:24.158629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.158950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.158959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.573 [2024-07-16 00:41:24.159205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.159215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.159581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.159592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.159780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.159790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.160132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.160142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.160521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.160531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.160894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.160903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.161239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.161249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.161658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.161667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.161852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.161861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.162168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.162179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.573 [2024-07-16 00:41:24.162402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.162412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.162621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.162631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.162986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.162996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.163334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.163343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.163687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.163696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.164026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.164035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.164376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.164386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.164747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.164757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.164816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.164825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.165140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.165152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.573 [2024-07-16 00:41:24.165507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.165518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.165871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.165880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.166222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.166237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.166592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.166602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.166946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.166955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.167287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.167297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.167651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.167660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.167919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.167928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.168289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.168299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.168697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.168707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 
00:30:10.573 [2024-07-16 00:41:24.169075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.169085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.169419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.169430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.169620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.169629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.170062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.170071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.170411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.170421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.170757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.170766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.171096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.171107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.573 [2024-07-16 00:41:24.171461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.573 [2024-07-16 00:41:24.171472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.573 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.171824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.171834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.172172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.172182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 
00:30:10.847 [2024-07-16 00:41:24.172391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.172401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.172774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.172784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.173117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.173526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.173537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.173800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.173809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.174185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.847 [2024-07-16 00:41:24.174195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.847 qpair failed and we were unable to recover it. 00:30:10.847 [2024-07-16 00:41:24.174473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.174487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.174763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.174773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.174845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.174856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.175172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.175182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 
00:30:10.848 [2024-07-16 00:41:24.175537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.175547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.175965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.175974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.176182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.176193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.176510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.176521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.176883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.176892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.177222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.177237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.177446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.177456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.177689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.177699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.178051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.178060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.178391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.178401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 
00:30:10.848 [2024-07-16 00:41:24.178639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.178649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.179020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.179030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.179385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.179395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.179726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.179736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.180079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.180088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.180410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.180420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.180792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.180802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.181010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.181020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.181410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.181421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.181760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.181770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 
00:30:10.848 [2024-07-16 00:41:24.181966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.181976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.182171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.182180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.182534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.182544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.182738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.182748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.183118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.183128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.183486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.183496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.183829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.183839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.184218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.184228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.184634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.184644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.184962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.184972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 
00:30:10.848 [2024-07-16 00:41:24.185307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.185317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.185725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.185735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.186065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.186074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.186408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.186419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.186685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.186695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.187047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.187058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.848 [2024-07-16 00:41:24.187426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.848 [2024-07-16 00:41:24.187436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.848 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.187786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.187799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.188180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.188189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.188604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.188613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 
00:30:10.849 [2024-07-16 00:41:24.188806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.188816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.189170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.189180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.189387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.189397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.189771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.189780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.190113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.190123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.190486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.190496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.190701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.190710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.191063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.191073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.191243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.191253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.191532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.191542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 
00:30:10.849 [2024-07-16 00:41:24.191736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.191746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.191959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.191969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.192028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.192038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.192363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.192372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.192707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.192716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.192908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.192918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.193299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.193309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.193671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.193681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.194017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.194026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.194222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.194238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 
00:30:10.849 [2024-07-16 00:41:24.194592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.194603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.194952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.194961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.195292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.195302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.195643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.195652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.195987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.196000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.196395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.196405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.196758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.196768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.197013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.197023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.197375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.197385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 00:30:10.849 [2024-07-16 00:41:24.197741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.849 [2024-07-16 00:41:24.197751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.849 qpair failed and we were unable to recover it. 
00:30:10.854 [2024-07-16 00:41:24.262059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.854 [2024-07-16 00:41:24.262069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.854 qpair failed and we were unable to recover it. 00:30:10.854 [2024-07-16 00:41:24.262296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.854 [2024-07-16 00:41:24.262306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.854 qpair failed and we were unable to recover it. 00:30:10.854 [2024-07-16 00:41:24.262662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.854 [2024-07-16 00:41:24.262671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.854 qpair failed and we were unable to recover it. 00:30:10.854 [2024-07-16 00:41:24.262878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.854 [2024-07-16 00:41:24.262887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.263246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.263256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.263617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.263626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.263816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.263827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.264049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.264058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.264476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.264487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.264887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.264897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 
00:30:10.855 [2024-07-16 00:41:24.265256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.265266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.265625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.265635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.265967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.265977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.266299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.266309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.266660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.266670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.266854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.266863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.267243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.267254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.267496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.267506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.267846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.267856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.268185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.268194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 
00:30:10.855 [2024-07-16 00:41:24.268530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.268541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.268829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.268839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.269210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.269220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.269562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.269572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.269817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.269827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.270184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.270194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.270575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.270586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.270943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.270953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.271117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.271127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.271481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.271491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 
00:30:10.855 [2024-07-16 00:41:24.271656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.271667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.271980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.271990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.272350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.272360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.272694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.272703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.273037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.273046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.273379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.273390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.273576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.273586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.273951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.273962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.274317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.274327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.274687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.274699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 
00:30:10.855 [2024-07-16 00:41:24.275108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.275117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.275448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.275459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.275696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.275706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.276079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.276088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.276425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.855 [2024-07-16 00:41:24.276436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.855 qpair failed and we were unable to recover it. 00:30:10.855 [2024-07-16 00:41:24.276646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.276656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.276900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.276911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.277263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.277272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.277448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.277458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.277872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.277882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 
00:30:10.856 [2024-07-16 00:41:24.278220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.278234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.278559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.278569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.278796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.278806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.279134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.279144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.279383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.279394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.279641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.279651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.280000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.280009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.280338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.280348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.280681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.280691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.281045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.281055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 
00:30:10.856 [2024-07-16 00:41:24.281450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.281460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.281832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.281842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.282175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.282185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.282485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.282495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.282878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.282888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.283247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.283257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.283391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.283401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.283750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.283760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.284136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.284145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.284498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.284508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 
00:30:10.856 [2024-07-16 00:41:24.284848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.284858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.285191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.285201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.285386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.285396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.285595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.285604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.285944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.285954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.286290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.286300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.286684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.286694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.287049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.287060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.856 [2024-07-16 00:41:24.287128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.856 [2024-07-16 00:41:24.287138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.856 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.287513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.287523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-16 00:41:24.287860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.287872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.288282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.288292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.288669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.288679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.289022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.289032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.289238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.289249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.289643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.289653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.289982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.289992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.290235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.290245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.290600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.290610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.291059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.291069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-16 00:41:24.291561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.291599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.292005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.292018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.292435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.292472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.292683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.292695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.293048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.293058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.293344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.293356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.293607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.293618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.294007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.294018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.294354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.294365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.294653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.294663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-16 00:41:24.294843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.294852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.295269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.295279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.295514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.295524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.295904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.295914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.296129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.296140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.296446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.296456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.296818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.296828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.297190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.297202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.297568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.297578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.297919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.297929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 
00:30:10.857 [2024-07-16 00:41:24.298357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.298367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.298724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.298734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.299064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.299074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.299270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.299428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.299438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.299769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.299779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.857 [2024-07-16 00:41:24.299975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.857 [2024-07-16 00:41:24.299986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.857 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.300206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.300217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.300580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.300590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.300847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.300857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-16 00:41:24.301217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.301228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.301619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.301630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.301988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.302000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.302357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.302367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.302704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.302714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.303051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.303061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.303397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.303407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.303580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.303590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.304006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.304017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.304426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.304436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-16 00:41:24.304835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.304844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.305167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.305176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.305359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.305369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.305432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.305441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.305768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.305778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.305960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.305970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.306347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.306357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.306715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.306725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.307061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.307071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.307396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.307407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-16 00:41:24.307609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.307619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.307995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.308004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.308344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.308353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.308539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.308548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.308916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.308926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.309149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.309159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.309510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.309519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.309703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.309712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.310076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.310087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.310312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.310322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 
00:30:10.858 [2024-07-16 00:41:24.310730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.310740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.311073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.311083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.311414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.311424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.311770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.311779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.311978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.311988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.312223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.312238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.312476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.312485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-07-16 00:41:24.312832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-07-16 00:41:24.312842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.313043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.313052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.313431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.313440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-16 00:41:24.313667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.313677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.314049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.314058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.314199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.314209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.314573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.314583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.314789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.314798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.315029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.315038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.315440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.315453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.315573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.315582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.315911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.315921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.316159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.316169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-16 00:41:24.316391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.316401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.316771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.316780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.317112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.317122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.317493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.317504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.317708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.317719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.318149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.318161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.318380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.318390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.318754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.318763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.319100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.319109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.319294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.319305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-16 00:41:24.319512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.319521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.319715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.319726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.320101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.320111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.320492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.320502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.320835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.320844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.321172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.321183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.321527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.321538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.321883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.321893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.322192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.322202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.322422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.322432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-07-16 00:41:24.322632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.322642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.323078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.323087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.323420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.323431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.323643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.323653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.324053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.324063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.324397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.324406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.324750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.324759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.325095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.325104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.325503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.325512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-07-16 00:41:24.325572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-07-16 00:41:24.325581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-16 00:41:24.325901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.325911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.326156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.326166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.326532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.326542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.326874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.326884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.327241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.327252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.327484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.327494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.327698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.327708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.328129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.328138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.328599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.328609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.328984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.328993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-16 00:41:24.329327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.329337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.329715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.329725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.330061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.330070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.330431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.330441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.330693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.330702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.330898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.330908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.331279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.331291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.331715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.331724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.332069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.332080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.332436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.332447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-16 00:41:24.332809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.332819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.333158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.333167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.333240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.333249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.333624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.333633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.333989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.333998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.334066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.334074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.334382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.334391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.334771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.334781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.334975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.334986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.335359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.335369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 
00:30:10.860 [2024-07-16 00:41:24.335729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.335739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.336116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.336126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.336555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.336566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.336897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.860 [2024-07-16 00:41:24.336906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.860 qpair failed and we were unable to recover it. 00:30:10.860 [2024-07-16 00:41:24.337156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.337165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.337518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.337529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.337722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.337732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.338103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.338114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.338485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.338494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.338872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.338881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
00:30:10.861 [2024-07-16 00:41:24.339086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.339095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.339497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.339507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.339852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.339861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.340196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.340209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.340492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.340831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.340840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.341217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.341226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.341559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.341569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.341899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.341908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.342155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.342166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
00:30:10.861 [2024-07-16 00:41:24.342392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.342402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.342622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.342632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.342989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.342999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.343442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.343452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.343784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.343794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:10.861 [2024-07-16 00:41:24.344128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.344140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:10.861 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:10.861 [2024-07-16 00:41:24.344572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.344584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:10.861 [2024-07-16 00:41:24.344874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.344885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
00:30:10.861 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.861 [2024-07-16 00:41:24.345250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.345262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.345629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.345639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.345984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.345994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.346201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.346211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.346551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.346561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.346740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.346750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.347111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.347121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.347468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.347478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.347807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.347818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.348156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.348166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 
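The bash xtrace fragments interleaved with the errors above come from nvmf_target_disconnect_tc2 via common/autotest_common.sh: a counter check (( i == 0 )), a return 0, timing_exit start_nvmf_tgt and set +x, i.e. some bounded wait in the harness is finishing while the initiator-side qpair is still being refused. As a rough sketch of that style of bounded readiness loop in bash (RETRIES is invented for the illustration, 10.0.0.2:4420 are the address and port from the log, and this is not the actual autotest helper):

# Hedged illustration of a bounded TCP readiness loop.
# RETRIES is an assumption for the example, not a value from autotest_common.sh.
ADDR=10.0.0.2
PORT=4420
RETRIES=30
i=$RETRIES
while (( i > 0 )); do
  # /dev/tcp is a bash redirection target; while nothing listens on
  # ADDR:PORT the open fails with "Connection refused" (errno 111).
  if timeout 1 bash -c "exec 3<>/dev/tcp/$ADDR/$PORT" 2>/dev/null; then
    echo "listener on $ADDR:$PORT is accepting connections"
    break
  fi
  sleep 1
  (( i-- ))
done
(( i == 0 )) && echo "listener never came up" >&2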
00:30:10.861 [2024-07-16 00:41:24.348538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.348549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.348878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.348889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.349212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.349223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.349477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.349488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.349682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.861 [2024-07-16 00:41:24.349692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.861 qpair failed and we were unable to recover it. 00:30:10.861 [2024-07-16 00:41:24.350088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.350098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.350469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.350479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.350814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.350824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.351162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.351172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.351559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.351568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 
00:30:10.862 [2024-07-16 00:41:24.351875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.351885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.352239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.352250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.352631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.352642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.352860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.352870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.353202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.353216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.353550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.353561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.353741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.353750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.354090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.354099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.354491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.354501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.354685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.354694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 
00:30:10.862 [2024-07-16 00:41:24.354893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.354902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.355011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.355021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Read completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 Write completed with error (sct=0, sc=8) 00:30:10.862 starting I/O failed 00:30:10.862 [2024-07-16 00:41:24.355771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:30:10.862 [2024-07-16 00:41:24.356123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.356164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8b4000b90 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.356599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.356630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8b4000b90 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.357038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.357066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa8b4000b90 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.357540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.357579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.357977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.357990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.358478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.358515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.358910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.358922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.358978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.358988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.359119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.359128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.359483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.359495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 
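Up to this point every connect() retry has targeted tqpair=0x5d4b50; in the block just above, the queued I/O finally completes with errors (sct=0, sc=8), the CQ reports transport error -6 (No such device or address, i.e. -ENXIO) on qpair id 1, and the next few connect attempts briefly report a different tqpair pointer (0x7fa8b4000b90) before the log returns to 0x5d4b50. Since errno = 111 is ECONNREFUSED, the whole burst simply means nothing was accepting TCP connections on 10.0.0.2:4420 at that moment. When triaging a flood like this it can help to collapse it; a small bash sketch, assuming the console output has been saved one entry per line to a file called build.log (a name invented for the example):

# How many retries hit each qpair pointer, and which pointers appear.
grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c

# First and last entry of the ECONNREFUSED (errno = 111) burst;
# the leading field is the Jenkins relative timestamp.
grep 'errno = 111' build.log | sed -n '1p;$p'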
00:30:10.862 [2024-07-16 00:41:24.359876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.359887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.360258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.360269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.360686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.862 [2024-07-16 00:41:24.360697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.862 qpair failed and we were unable to recover it. 00:30:10.862 [2024-07-16 00:41:24.360940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.360950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.361210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.361219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.361579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.361590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.361786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.361795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.362050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.362062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.362426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.362437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.362811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.362821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 
00:30:10.863 [2024-07-16 00:41:24.363032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.363043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.363311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.363321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.363663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.364048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.364058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.364263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.364273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.364450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.364460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.364882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.364895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.365296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.365306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.365644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.365654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.366010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.366019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 
00:30:10.863 [2024-07-16 00:41:24.366345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.366356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.366721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.366731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.366991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.367001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.367262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.367272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.367584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.367593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.367940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.367950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.368322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.368332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.368702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.368712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.369087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.369097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.369428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.369437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 
00:30:10.863 [2024-07-16 00:41:24.369772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.369782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.370113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.370122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.370513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.370523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.370836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.370847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.371221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.371235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.371598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.371608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.371974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.371983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.372318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.372329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.372661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.372671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.373000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.373009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 
00:30:10.863 [2024-07-16 00:41:24.373349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.373359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.373540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.373551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.373916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.373926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.374287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.374299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.863 [2024-07-16 00:41:24.374688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.863 [2024-07-16 00:41:24.374697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.863 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.374880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.374889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.375083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.375093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.375290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.375301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.375638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.375648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.375984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.375994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 
00:30:10.864 [2024-07-16 00:41:24.376227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.376241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.376544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.376554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.376753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.376762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.377070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.377080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.377276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.377285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.377563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.377573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.377943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.377952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.378286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.378296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.378501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.378510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.378845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.378854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 
00:30:10.864 [2024-07-16 00:41:24.379199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.379209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.379609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.379620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.380016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.380025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.380360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.380369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.380732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.380742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.381073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.381082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.381270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.381280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.381477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.381487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.381821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.381830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.382176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.382187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 
00:30:10.864 [2024-07-16 00:41:24.382541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.382551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.382732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.382742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.382864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.382874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.864 [2024-07-16 00:41:24.383214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.383225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.383571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.383582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:10.864 [2024-07-16 00:41:24.383864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.383875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.864 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.864 [2024-07-16 00:41:24.384239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.384251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.384489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.384498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 
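Interleaved with the connect retries, the script's xtrace shows two setup steps: it installs a cleanup trap (dump the app's shared-memory stats if possible, then run nvmftestfini) and creates a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0 via bdev_malloc_create. rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client; outside the harness, roughly the same steps look like the sketch below (the rpc.py path and the stand-in cleanup function are assumptions, not what the test itself runs):

    #!/usr/bin/env bash
    set -euo pipefail
    # Mirror the cleanup pattern from the log: always tear down on exit or interrupt.
    cleanup() { echo "tearing down nvmf target"; }
    trap cleanup SIGINT SIGTERM EXIT
    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0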
00:30:10.864 [2024-07-16 00:41:24.384892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.384902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.385060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.385069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.385393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.385404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.385730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.385740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.386066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.386078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.386325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.864 [2024-07-16 00:41:24.386335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.864 qpair failed and we were unable to recover it. 00:30:10.864 [2024-07-16 00:41:24.386690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.386699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.387031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.387040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.387361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.387371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.387726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.387735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 
00:30:10.865 [2024-07-16 00:41:24.388089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.388098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.388317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.388326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.388684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.388693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.389024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.389033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.389285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.389295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.389638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.389647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.389971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.389980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.390307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.390317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.390697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.390706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.391079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.391089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 
00:30:10.865 [2024-07-16 00:41:24.391481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.391490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.391824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.391834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.392156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.392165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.392520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.392530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.392793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.392803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.393138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.393148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.393354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.393364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.393775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.393784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.393968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.393978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.394354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.394363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 
00:30:10.865 [2024-07-16 00:41:24.394709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.394719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.395052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.395062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.395445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.395455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.395675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.395684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.396034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.396044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.396394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.396404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.396734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.396744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.396919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.396931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.397263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.397273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 00:30:10.865 [2024-07-16 00:41:24.397480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.397489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.865 qpair failed and we were unable to recover it. 
00:30:10.865 [2024-07-16 00:41:24.397770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.865 [2024-07-16 00:41:24.397780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.398166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.398175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.398508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.398518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.398874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.398886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.399225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.399239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 Malloc0 00:30:10.866 [2024-07-16 00:41:24.399574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.399584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.399794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.399804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.400046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.400056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.866 [2024-07-16 00:41:24.400267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.400278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 
00:30:10.866 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:10.866 [2024-07-16 00:41:24.400516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.400525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.866 [2024-07-16 00:41:24.400868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.400878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.866 [2024-07-16 00:41:24.401216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.401227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.401476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.401486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.401834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.401844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.402057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.402066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.402429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.402438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.402808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.402817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.403153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.403163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 
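The next traced step, nvmf_create_transport -t tcp -o, creates the TCP transport inside the running nvmf target (the extra -o is a harness-supplied transport option and is left as-is here). Stripped down to the essential call, the equivalent direct invocation would be:

    # Create the NVMe-oF TCP transport in the running target application.
    ./scripts/rpc.py nvmf_create_transport -t tcp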
00:30:10.866 [2024-07-16 00:41:24.403517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.403528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.403611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.403620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.403957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.403966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.404160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.404169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.404542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.404551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.404897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.404906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.405242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.405253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.405586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.405596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.405989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.405998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.406175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.406185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 
00:30:10.866 [2024-07-16 00:41:24.406561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.406571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.406885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.866 [2024-07-16 00:41:24.406902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.406912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.407254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.407270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.407488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.407498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.407860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.407870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.408256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.408266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.408685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.408694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.409033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.409042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.409395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.409405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.409765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.409775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 
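The *NOTICE*: *** TCP Transport Init *** entry a few lines above is the target-side confirmation that the transport created by the previous RPC actually came up, even while the initiator is still being refused. A quick out-of-band check (again via rpc.py, an assumption about how one would inspect this target) would be:

    # List the transports the target currently has configured; the TCP transport
    # created above should appear in the output.
    ./scripts/rpc.py nvmf_get_transports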
00:30:10.866 [2024-07-16 00:41:24.409986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.866 [2024-07-16 00:41:24.409996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.866 qpair failed and we were unable to recover it. 00:30:10.866 [2024-07-16 00:41:24.410350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.410360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.410705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.410715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.410963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.410972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.411338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.411348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.411639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.411648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.411853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.411865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.412133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.412143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.412227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.412239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.412567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.412576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-16 00:41:24.412904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.412913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.413245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.413255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.413620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.413629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.413959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.413969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.414298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.414308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.414664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.414673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.414872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.414882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.415203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.415212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.415627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.415637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.867 [2024-07-16 00:41:24.416011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.416023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.867 [2024-07-16 00:41:24.416389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.416399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.867 [2024-07-16 00:41:24.416747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.416757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.867 [2024-07-16 00:41:24.417001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.417011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.417388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.417397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.417728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.417739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.418092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.418102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.418424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.418434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.418782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.418792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.419037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.419046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-16 00:41:24.419340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.419350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.419697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.419706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.420088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.420099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.420428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.420439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.420822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.420832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.421177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.421186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.421534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.421543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.421778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.421787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.422144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.422154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.422493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.422503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 
00:30:10.867 [2024-07-16 00:41:24.422842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.422851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.423086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.867 [2024-07-16 00:41:24.423095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.867 qpair failed and we were unable to recover it. 00:30:10.867 [2024-07-16 00:41:24.423482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.423491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.423829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.423838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.424178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.424188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.424517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.424527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.424889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.424899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.425256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.425266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.425611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.425620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.425972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.425981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-16 00:41:24.426314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.426323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.426679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.426689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.426881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.426890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.427098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.427108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.427514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.427523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.427860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.427869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.868 [2024-07-16 00:41:24.428226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.428240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:10.868 [2024-07-16 00:41:24.428457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.428467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.868 [2024-07-16 00:41:24.428826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.428838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.868 [2024-07-16 00:41:24.429276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.429286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.429515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.429525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.429897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.429907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.430239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.430589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.430600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.430935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.430945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.431331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.431352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.431587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.431596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.431871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.431880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.432119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.432129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-16 00:41:24.432472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.432481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.432812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.432822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.432982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.432994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.433056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.433065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.433452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.433462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.433803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.433813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.434123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.434132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.434493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.434503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.434838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.434848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.435099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.435109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 
00:30:10.868 [2024-07-16 00:41:24.435338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.435349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.435755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.435765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.868 [2024-07-16 00:41:24.436107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.868 [2024-07-16 00:41:24.436116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.868 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.436413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.436422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.436849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.436858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.437111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.437120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.437466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.437477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.437835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.437845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.438177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.438187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.438398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.438407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-16 00:41:24.438753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.438762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.438922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.438932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.439201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.439211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.439556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.439566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.439755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.439765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.869 [2024-07-16 00:41:24.440137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.440147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.869 [2024-07-16 00:41:24.440384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.440394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.869 [2024-07-16 00:41:24.440763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.440772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.869 [2024-07-16 00:41:24.441151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.441162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.441372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.441382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.441654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.441663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.441861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.441870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.442076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.442085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.442350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.442360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.442568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.442577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.442797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.442807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.443010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.443020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.443219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.443233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 
00:30:10.869 [2024-07-16 00:41:24.443595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.443606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.443985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.443995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.444220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.444234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.444605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.444616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.444950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.444959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.445317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.445327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.445680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.869 [2024-07-16 00:41:24.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.869 qpair failed and we were unable to recover it. 00:30:10.869 [2024-07-16 00:41:24.445885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-16 00:41:24.445894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-16 00:41:24.446110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-16 00:41:24.446119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-16 00:41:24.446534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-16 00:41:24.446544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 
00:30:10.870 [2024-07-16 00:41:24.446883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.870 [2024-07-16 00:41:24.446892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4b50 with addr=10.0.0.2, port=4420 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 [2024-07-16 00:41:24.447155] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.870 [2024-07-16 00:41:24.457704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.870 [2024-07-16 00:41:24.457815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.870 [2024-07-16 00:41:24.457833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.870 [2024-07-16 00:41:24.457841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.870 [2024-07-16 00:41:24.457848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:10.870 [2024-07-16 00:41:24.457866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.870 qpair failed and we were unable to recover it. 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.870 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1292614 00:30:11.133 [2024-07-16 00:41:24.467738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.467876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.467892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.467899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.467906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.467920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 
00:30:11.133 [2024-07-16 00:41:24.477727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.477799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.477814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.477821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.477827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.477841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.487668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.487740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.487756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.487763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.487768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.487782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.497687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.497758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.497774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.497781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.497787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.497801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 
00:30:11.133 [2024-07-16 00:41:24.507626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.507722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.507738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.507748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.507755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.507768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.517754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.517824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.517839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.517846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.517852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.517865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.527766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.527863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.527879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.527885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.527891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.527905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 
00:30:11.133 [2024-07-16 00:41:24.537781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.537897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.537922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.537930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.537937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.537956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.547794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.547896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.547921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.547930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.547937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.547955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.557720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.557786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.557803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.557810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.557817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.557832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 
00:30:11.133 [2024-07-16 00:41:24.567823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.567896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.567911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.567918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.567924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.567938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.133 [2024-07-16 00:41:24.577889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.133 [2024-07-16 00:41:24.577987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.133 [2024-07-16 00:41:24.578004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.133 [2024-07-16 00:41:24.578011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.133 [2024-07-16 00:41:24.578018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.133 [2024-07-16 00:41:24.578033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.133 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.587906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.588007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.588032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.588040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.588047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.588065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 
00:30:11.134 [2024-07-16 00:41:24.597980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.598050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.598075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.598088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.598095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.598114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.607953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.608052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.608069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.608076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.608082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.608096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.617984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.618064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.618080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.618086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.618092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.618106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 
00:30:11.134 [2024-07-16 00:41:24.628084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.628152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.628167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.628174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.628180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.628193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.638049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.638119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.638135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.638141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.638147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.638161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.648083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.648151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.648167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.648173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.648179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.648192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 
00:30:11.134 [2024-07-16 00:41:24.658125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.658263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.658279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.658286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.658292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.658307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.668143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.668210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.668225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.668237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.668243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.668256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.678160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.678227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.678246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.678252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.678258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.678272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 
00:30:11.134 [2024-07-16 00:41:24.688195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.688268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.688283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.688293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.688299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.688313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.698256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.698331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.698347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.698353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.698359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.698373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.708211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.708281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.708296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.708303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.708308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.708321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 
00:30:11.134 [2024-07-16 00:41:24.718297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.718367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.718382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.134 [2024-07-16 00:41:24.718388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.134 [2024-07-16 00:41:24.718394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.134 [2024-07-16 00:41:24.718408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.134 qpair failed and we were unable to recover it. 00:30:11.134 [2024-07-16 00:41:24.728290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.134 [2024-07-16 00:41:24.728354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.134 [2024-07-16 00:41:24.728369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.135 [2024-07-16 00:41:24.728376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.135 [2024-07-16 00:41:24.728381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.135 [2024-07-16 00:41:24.728395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.135 qpair failed and we were unable to recover it. 00:30:11.135 [2024-07-16 00:41:24.738510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.135 [2024-07-16 00:41:24.738619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.135 [2024-07-16 00:41:24.738634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.135 [2024-07-16 00:41:24.738641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.135 [2024-07-16 00:41:24.738647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.135 [2024-07-16 00:41:24.738660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.135 qpair failed and we were unable to recover it. 
00:30:11.135 [2024-07-16 00:41:24.748481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.135 [2024-07-16 00:41:24.748550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.135 [2024-07-16 00:41:24.748565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.135 [2024-07-16 00:41:24.748572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.135 [2024-07-16 00:41:24.748578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.135 [2024-07-16 00:41:24.748591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.135 qpair failed and we were unable to recover it. 00:30:11.135 [2024-07-16 00:41:24.758344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.135 [2024-07-16 00:41:24.758409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.135 [2024-07-16 00:41:24.758424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.135 [2024-07-16 00:41:24.758430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.135 [2024-07-16 00:41:24.758437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.135 [2024-07-16 00:41:24.758450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.135 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.768348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.768418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.768433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.768440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.768446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.768459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 
00:30:11.398 [2024-07-16 00:41:24.778460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.778529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.778552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.778559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.778565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.778579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.788460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.788524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.788539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.788546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.788552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.788566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.798491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.798554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.798570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.798577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.798583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.798596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 
00:30:11.398 [2024-07-16 00:41:24.808518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.808586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.808601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.808608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.808614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.808627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.818573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.818650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.818665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.818672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.818678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.818695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.828579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.828643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.828658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.828665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.828671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.828684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 
00:30:11.398 [2024-07-16 00:41:24.838576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.838641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.838657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.838663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.838669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.838683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.848510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.848579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.848594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.848601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.848607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.848620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.398 [2024-07-16 00:41:24.858665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.858736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.858751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.858758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.858764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.858777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 
00:30:11.398 [2024-07-16 00:41:24.868701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.398 [2024-07-16 00:41:24.868769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.398 [2024-07-16 00:41:24.868788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.398 [2024-07-16 00:41:24.868795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.398 [2024-07-16 00:41:24.868801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.398 [2024-07-16 00:41:24.868814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.398 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.878700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.878769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.878784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.878791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.878797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.878810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.888731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.888798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.888813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.888820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.888826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.888839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-16 00:41:24.898767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.898840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.898854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.898861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.898867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.898880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.908787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.908853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.908868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.908875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.908881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.908897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.918808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.918873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.918888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.918894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.918900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.918913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-16 00:41:24.928851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.928920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.928935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.928942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.928948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.928961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.938917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.938996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.939021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.939029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.939036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.939055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.948903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.948974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.948999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.949007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.949014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.949032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-16 00:41:24.958912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.958984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.959013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.959022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.959028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.959047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.968955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.969032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.969057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.969065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.969071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.969090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.979000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.979074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.979091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.979098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.979104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.979119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.399 [2024-07-16 00:41:24.989033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.989106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.989122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.989129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.989135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.989149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:24.999042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:24.999108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:24.999124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:24.999131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:24.999137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:24.999155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 00:30:11.399 [2024-07-16 00:41:25.009075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.399 [2024-07-16 00:41:25.009144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.399 [2024-07-16 00:41:25.009159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.399 [2024-07-16 00:41:25.009166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.399 [2024-07-16 00:41:25.009173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.399 [2024-07-16 00:41:25.009186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.399 qpair failed and we were unable to recover it. 
00:30:11.400 [2024-07-16 00:41:25.019166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.400 [2024-07-16 00:41:25.019248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.400 [2024-07-16 00:41:25.019263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.400 [2024-07-16 00:41:25.019270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.400 [2024-07-16 00:41:25.019276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.400 [2024-07-16 00:41:25.019290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.400 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.029131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.029200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.029217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.029227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.029241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.029255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.039169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.039235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.039251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.039258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.039264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.039278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 
00:30:11.662 [2024-07-16 00:41:25.049252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.049321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.049341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.049348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.049354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.049368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.059219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.059294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.059310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.059317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.059324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.059337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.069237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.069310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.069325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.069332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.069338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.069352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 
00:30:11.662 [2024-07-16 00:41:25.079297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.079363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.079379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.079387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.079394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.079408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.089316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.089387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.089403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.089409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.089419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.089433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.099355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.099435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.099451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.099458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.099464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.099477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 
00:30:11.662 [2024-07-16 00:41:25.109400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.109468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.109483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.662 [2024-07-16 00:41:25.109490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.662 [2024-07-16 00:41:25.109496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.662 [2024-07-16 00:41:25.109509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.662 qpair failed and we were unable to recover it. 00:30:11.662 [2024-07-16 00:41:25.119406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.662 [2024-07-16 00:41:25.119471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.662 [2024-07-16 00:41:25.119486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.119493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.119499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.119513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.129434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.129502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.129517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.129524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.129530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.129544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 
00:30:11.663 [2024-07-16 00:41:25.139474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.139548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.139564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.139571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.139577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.139591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.149488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.149558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.149574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.149581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.149587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.149600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.159527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.159595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.159610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.159617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.159623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.159637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 
00:30:11.663 [2024-07-16 00:41:25.169554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.169626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.169641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.169647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.169653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.169666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.179571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.179640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.179655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.179662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.179672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.179687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.189471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.189545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.189560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.189567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.189573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.189586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 
00:30:11.663 [2024-07-16 00:41:25.199629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.199692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.199707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.199714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.199720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.663 [2024-07-16 00:41:25.199734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.663 qpair failed and we were unable to recover it. 00:30:11.663 [2024-07-16 00:41:25.209541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.663 [2024-07-16 00:41:25.209621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.663 [2024-07-16 00:41:25.209637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.663 [2024-07-16 00:41:25.209645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.663 [2024-07-16 00:41:25.209651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.209666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.664 [2024-07-16 00:41:25.219665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.219739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.219755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.219762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.219768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.219782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 
00:30:11.664 [2024-07-16 00:41:25.229653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.229721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.229736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.229743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.229749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.229763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.664 [2024-07-16 00:41:25.239716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.239787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.239801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.239808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.239814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.239828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.664 [2024-07-16 00:41:25.249770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.249838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.249853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.249859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.249866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.249879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 
00:30:11.664 [2024-07-16 00:41:25.259764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.259830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.259846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.259852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.259859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.259873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.664 [2024-07-16 00:41:25.269721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.269786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.269802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.269809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.269819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.269832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.664 [2024-07-16 00:41:25.279848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.279917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.279932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.279939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.279945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.279958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 
00:30:11.664 [2024-07-16 00:41:25.289878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.664 [2024-07-16 00:41:25.289948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.664 [2024-07-16 00:41:25.289973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.664 [2024-07-16 00:41:25.289982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.664 [2024-07-16 00:41:25.289988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.664 [2024-07-16 00:41:25.290007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.664 qpair failed and we were unable to recover it. 00:30:11.929 [2024-07-16 00:41:25.299900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.299977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.300002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.300010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.929 [2024-07-16 00:41:25.300017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.929 [2024-07-16 00:41:25.300035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.929 qpair failed and we were unable to recover it. 00:30:11.929 [2024-07-16 00:41:25.309919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.309989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.310014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.310023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.929 [2024-07-16 00:41:25.310029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.929 [2024-07-16 00:41:25.310048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.929 qpair failed and we were unable to recover it. 
00:30:11.929 [2024-07-16 00:41:25.319929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.320004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.320021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.320029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.929 [2024-07-16 00:41:25.320035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.929 [2024-07-16 00:41:25.320050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.929 qpair failed and we were unable to recover it. 00:30:11.929 [2024-07-16 00:41:25.329860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.329929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.329944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.329951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.929 [2024-07-16 00:41:25.329957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.929 [2024-07-16 00:41:25.329971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.929 qpair failed and we were unable to recover it. 00:30:11.929 [2024-07-16 00:41:25.340008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.340079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.340095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.340101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.929 [2024-07-16 00:41:25.340108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.929 [2024-07-16 00:41:25.340121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.929 qpair failed and we were unable to recover it. 
00:30:11.929 [2024-07-16 00:41:25.350018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.929 [2024-07-16 00:41:25.350085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.929 [2024-07-16 00:41:25.350100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.929 [2024-07-16 00:41:25.350107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.350113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.350126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.360047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.360110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.360126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.360133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.360143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.360157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.370084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.370154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.370170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.370177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.370183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.370196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 
00:30:11.930 [2024-07-16 00:41:25.380001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.380081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.380097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.380103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.380109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.380123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.390129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.390194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.390209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.390216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.390222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.390242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.400178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.400243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.400259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.400266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.400272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.400285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 
00:30:11.930 [2024-07-16 00:41:25.410208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.410279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.410295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.410302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.410308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.410322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.420246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.420318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.420333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.420340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.420346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.420360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.430135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.430208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.430223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.430236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.430242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.430256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 
00:30:11.930 [2024-07-16 00:41:25.440292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.440361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.440377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.440383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.440390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.440403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.450307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.450376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.450391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.450402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.450408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.450421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.460328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.460403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.460419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.460426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.460431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.460446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 
00:30:11.930 [2024-07-16 00:41:25.470359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.470430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.470446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.470453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.470459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.470473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.480396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.480463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.480478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.930 [2024-07-16 00:41:25.480484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.930 [2024-07-16 00:41:25.480490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.930 [2024-07-16 00:41:25.480504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.930 qpair failed and we were unable to recover it. 00:30:11.930 [2024-07-16 00:41:25.490425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.930 [2024-07-16 00:41:25.490492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.930 [2024-07-16 00:41:25.490507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.490514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.490520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.490533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 
00:30:11.931 [2024-07-16 00:41:25.500418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.500482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.500497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.500504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.500510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.500523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 00:30:11.931 [2024-07-16 00:41:25.510477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.510547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.510562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.510569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.510575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.510587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 00:30:11.931 [2024-07-16 00:41:25.520513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.520579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.520594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.520601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.520607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.520620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 
00:30:11.931 [2024-07-16 00:41:25.530515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.530581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.530596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.530603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.530609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.530622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 00:30:11.931 [2024-07-16 00:41:25.540544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.540615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.540631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.540641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.540647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.540660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 00:30:11.931 [2024-07-16 00:41:25.550563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.931 [2024-07-16 00:41:25.550625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.931 [2024-07-16 00:41:25.550641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.931 [2024-07-16 00:41:25.550648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.931 [2024-07-16 00:41:25.550654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:11.931 [2024-07-16 00:41:25.550667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.931 qpair failed and we were unable to recover it. 
00:30:12.228 [2024-07-16 00:41:25.560626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.560690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.560706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.560713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.560719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.560732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.570628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.570692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.570708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.570715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.570721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.570734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.580687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.580790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.580805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.580812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.580818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.580831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 
00:30:12.228 [2024-07-16 00:41:25.590688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.590752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.590767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.590774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.590780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.590793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.600722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.600793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.600808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.600815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.600821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.600835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.610629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.610698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.610713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.610720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.610726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.610740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 
00:30:12.228 [2024-07-16 00:41:25.620779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.620846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.620861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.620868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.620874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.620887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.630811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.630872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.630888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.630898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.630904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.630917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.640865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.640954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.640970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.640976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.640982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.640996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 
00:30:12.228 [2024-07-16 00:41:25.650911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.650976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.650991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.650997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.651004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.651017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.660891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.660969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.660994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.661003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.661009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.661028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.670933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.671006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.671032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.671041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.671049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.671069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 
00:30:12.228 [2024-07-16 00:41:25.680961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.681036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.228 [2024-07-16 00:41:25.681061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.228 [2024-07-16 00:41:25.681070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.228 [2024-07-16 00:41:25.681076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.228 [2024-07-16 00:41:25.681095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.228 qpair failed and we were unable to recover it. 00:30:12.228 [2024-07-16 00:41:25.690976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.228 [2024-07-16 00:41:25.691046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.691063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.691070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.691076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.691091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.701003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.701073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.701089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.701096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.701102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.701116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 
00:30:12.229 [2024-07-16 00:41:25.711011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.711076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.711091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.711098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.711104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.711118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.721084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.721155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.721174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.721181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.721187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.721201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.731095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.731166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.731181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.731188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.731194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.731207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 
00:30:12.229 [2024-07-16 00:41:25.741123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.741204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.741219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.741226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.741237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.741251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.751159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.751224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.751244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.751251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.751258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.751271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.761219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.761299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.761319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.761326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.761332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.761348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 
00:30:12.229 [2024-07-16 00:41:25.771221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.771292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.771309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.771316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.771322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.771336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.781136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.781213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.781228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.781241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.781248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.781262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.791265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.791331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.791347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.791353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.791360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.791373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 
00:30:12.229 [2024-07-16 00:41:25.801297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.801360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.801375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.801382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.801388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.801402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.811217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.811291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.811310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.811317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.811323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.811336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 00:30:12.229 [2024-07-16 00:41:25.821374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.821452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.821467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.821474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.821479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.229 [2024-07-16 00:41:25.821493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.229 qpair failed and we were unable to recover it. 
00:30:12.229 [2024-07-16 00:41:25.831378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.229 [2024-07-16 00:41:25.831460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.229 [2024-07-16 00:41:25.831479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.229 [2024-07-16 00:41:25.831487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.229 [2024-07-16 00:41:25.831493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.230 [2024-07-16 00:41:25.831507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.230 qpair failed and we were unable to recover it. 00:30:12.230 [2024-07-16 00:41:25.841365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.230 [2024-07-16 00:41:25.841463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.230 [2024-07-16 00:41:25.841478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.230 [2024-07-16 00:41:25.841485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.230 [2024-07-16 00:41:25.841491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.230 [2024-07-16 00:41:25.841505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.230 qpair failed and we were unable to recover it. 00:30:12.230 [2024-07-16 00:41:25.851447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.230 [2024-07-16 00:41:25.851514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.230 [2024-07-16 00:41:25.851529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.230 [2024-07-16 00:41:25.851537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.230 [2024-07-16 00:41:25.851543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.230 [2024-07-16 00:41:25.851560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.230 qpair failed and we were unable to recover it. 
00:30:12.492 [2024-07-16 00:41:25.861480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.492 [2024-07-16 00:41:25.861551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.492 [2024-07-16 00:41:25.861567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.492 [2024-07-16 00:41:25.861574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.492 [2024-07-16 00:41:25.861580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.492 [2024-07-16 00:41:25.861594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.492 qpair failed and we were unable to recover it. 00:30:12.492 [2024-07-16 00:41:25.871492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.492 [2024-07-16 00:41:25.871563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.492 [2024-07-16 00:41:25.871579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.492 [2024-07-16 00:41:25.871586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.492 [2024-07-16 00:41:25.871592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.492 [2024-07-16 00:41:25.871606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.492 qpair failed and we were unable to recover it. 00:30:12.492 [2024-07-16 00:41:25.881533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.881597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.881612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.881619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.881625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.881638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 
00:30:12.493 [2024-07-16 00:41:25.891550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.891617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.891634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.891643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.891649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.891664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.901630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.901712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.901732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.901739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.901745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.901758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.911500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.911592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.911607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.911614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.911620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.911634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 
00:30:12.493 [2024-07-16 00:41:25.921711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.921778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.921793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.921800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.921806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.921819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.931546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.931613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.931628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.931635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.931641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.931654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.941680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.941754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.941769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.941776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.941782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.941799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 
00:30:12.493 [2024-07-16 00:41:25.951713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.951779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.951794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.951801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.951807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.951820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.961734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.961835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.961850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.961857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.961863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.961877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.971758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.971916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.971932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.971939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.971945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.971958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 
00:30:12.493 [2024-07-16 00:41:25.981752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.981820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.981835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.493 [2024-07-16 00:41:25.981842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.493 [2024-07-16 00:41:25.981848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.493 [2024-07-16 00:41:25.981862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.493 qpair failed and we were unable to recover it. 00:30:12.493 [2024-07-16 00:41:25.991819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.493 [2024-07-16 00:41:25.991893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.493 [2024-07-16 00:41:25.991911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:25.991918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:25.991924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:25.991937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.001745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.001806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.001823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.001830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.001836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.001851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 
00:30:12.494 [2024-07-16 00:41:26.011834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.011902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.011918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.011925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.011931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.011944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.021900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.021978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.022003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.022011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.022018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.022037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.031949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.032018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.032044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.032052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.032059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.032082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 
00:30:12.494 [2024-07-16 00:41:26.041946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.042014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.042039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.042047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.042054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.042072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.052000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.052068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.052085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.052092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.052098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.052113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.062062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.062144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.062160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.062167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.062173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.062187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 
00:30:12.494 [2024-07-16 00:41:26.071916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.072066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.072082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.072088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.072094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.072108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.082048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.082114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.082134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.082141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.082147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.082160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 00:30:12.494 [2024-07-16 00:41:26.092118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.092236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.092251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.092258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.494 [2024-07-16 00:41:26.092264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.494 [2024-07-16 00:41:26.092278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.494 qpair failed and we were unable to recover it. 
00:30:12.494 [2024-07-16 00:41:26.102108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.494 [2024-07-16 00:41:26.102209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.494 [2024-07-16 00:41:26.102226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.494 [2024-07-16 00:41:26.102239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.495 [2024-07-16 00:41:26.102246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.495 [2024-07-16 00:41:26.102260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.495 qpair failed and we were unable to recover it. 00:30:12.495 [2024-07-16 00:41:26.112129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.495 [2024-07-16 00:41:26.112195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.495 [2024-07-16 00:41:26.112211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.495 [2024-07-16 00:41:26.112218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.495 [2024-07-16 00:41:26.112224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.495 [2024-07-16 00:41:26.112244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.495 qpair failed and we were unable to recover it. 00:30:12.495 [2024-07-16 00:41:26.122155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.495 [2024-07-16 00:41:26.122218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.495 [2024-07-16 00:41:26.122239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.495 [2024-07-16 00:41:26.122246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.495 [2024-07-16 00:41:26.122256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.495 [2024-07-16 00:41:26.122270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.495 qpair failed and we were unable to recover it. 
00:30:12.756 [2024-07-16 00:41:26.132191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.132296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.132312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.132319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.132325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.132339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.142241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.142308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.142325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.142332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.142338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.142354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.152330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.152409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.152425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.152432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.152438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.152452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 
00:30:12.756 [2024-07-16 00:41:26.162283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.162350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.162365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.162372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.162378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.162392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.172304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.172419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.172435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.172442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.172448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.172461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.182331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.182397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.182412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.182419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.182425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.182439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 
00:30:12.756 [2024-07-16 00:41:26.192336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.192404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.192419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.192426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.192432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.192445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.202271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.202339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.202354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.756 [2024-07-16 00:41:26.202361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.756 [2024-07-16 00:41:26.202367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.756 [2024-07-16 00:41:26.202381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.756 qpair failed and we were unable to recover it. 00:30:12.756 [2024-07-16 00:41:26.212463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.756 [2024-07-16 00:41:26.212529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.756 [2024-07-16 00:41:26.212544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.212551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.212561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.212574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.222460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.222531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.222546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.222553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.222559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.222573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.232488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.232553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.232569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.232576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.232582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.232595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.242528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.242592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.242608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.242615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.242621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.242635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.252599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.252708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.252723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.252730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.252736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.252749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.262591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.262667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.262682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.262689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.262695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.262709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.272485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.272556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.272571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.272578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.272584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.272597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.282625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.282688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.282703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.282710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.282716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.282730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.292639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.292706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.292721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.292728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.292734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.292747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.302652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.302723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.302738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.302745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.302754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.302768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.312586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.312654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.312671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.312678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.312684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.312698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.322714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.322812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.322828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.322835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.322841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.322855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.332715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.332787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.332803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.332810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.332816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.332829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.342767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.342898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.342913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.342920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.342926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.342939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.352848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.352912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.352928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.352935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.352941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.352954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.362841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.362906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.362921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.362928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.362935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.362948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 
00:30:12.757 [2024-07-16 00:41:26.372820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.372935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.372951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.372958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.372964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.372977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:12.757 [2024-07-16 00:41:26.382950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.757 [2024-07-16 00:41:26.383067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.757 [2024-07-16 00:41:26.383083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.757 [2024-07-16 00:41:26.383090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.757 [2024-07-16 00:41:26.383096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:12.757 [2024-07-16 00:41:26.383110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:12.757 qpair failed and we were unable to recover it. 00:30:13.018 [2024-07-16 00:41:26.392921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.018 [2024-07-16 00:41:26.392991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.018 [2024-07-16 00:41:26.393006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.018 [2024-07-16 00:41:26.393016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.018 [2024-07-16 00:41:26.393023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.018 [2024-07-16 00:41:26.393036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.018 qpair failed and we were unable to recover it. 
00:30:13.018 [2024-07-16 00:41:26.402851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.018 [2024-07-16 00:41:26.402914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.018 [2024-07-16 00:41:26.402929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.018 [2024-07-16 00:41:26.402936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.018 [2024-07-16 00:41:26.402942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.018 [2024-07-16 00:41:26.402955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.018 qpair failed and we were unable to recover it. 00:30:13.018 [2024-07-16 00:41:26.412986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.018 [2024-07-16 00:41:26.413054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.018 [2024-07-16 00:41:26.413069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.018 [2024-07-16 00:41:26.413076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.018 [2024-07-16 00:41:26.413082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.018 [2024-07-16 00:41:26.413095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.018 qpair failed and we were unable to recover it. 00:30:13.018 [2024-07-16 00:41:26.423012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.018 [2024-07-16 00:41:26.423089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.018 [2024-07-16 00:41:26.423104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.018 [2024-07-16 00:41:26.423111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.018 [2024-07-16 00:41:26.423117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.018 [2024-07-16 00:41:26.423131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.018 qpair failed and we were unable to recover it. 
00:30:13.018 [2024-07-16 00:41:26.433037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.018 [2024-07-16 00:41:26.433101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.018 [2024-07-16 00:41:26.433116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.018 [2024-07-16 00:41:26.433123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.018 [2024-07-16 00:41:26.433129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.018 [2024-07-16 00:41:26.433142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.018 qpair failed and we were unable to recover it. 00:30:13.018 [2024-07-16 00:41:26.443126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.443250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.443266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.443274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.443280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.443293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.453081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.453152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.453167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.453174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.453180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.453193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 
00:30:13.019 [2024-07-16 00:41:26.463116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.463192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.463208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.463215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.463221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.463239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.473204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.473270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.473286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.473293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.473299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.473312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.483189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.483259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.483275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.483285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.483291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.483305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 
00:30:13.019 [2024-07-16 00:41:26.493190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.493265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.493280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.493287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.493293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.493307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.503210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.503315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.503331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.503338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.503345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.503359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.513274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.513337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.513353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.513360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.513366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.513380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 
00:30:13.019 [2024-07-16 00:41:26.523348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.523415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.523430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.523437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.523443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.523457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.533337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.533436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.533451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.533458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.533464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.533477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.543330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.543399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.543414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.543421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.543427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.543440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 
00:30:13.019 [2024-07-16 00:41:26.553384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.553452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.553467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.553474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.553480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.553493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.563411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.563476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.563490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.563497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.563503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.563516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 00:30:13.019 [2024-07-16 00:41:26.573463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.573529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.573544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.573554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.019 [2024-07-16 00:41:26.573560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.019 [2024-07-16 00:41:26.573573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.019 qpair failed and we were unable to recover it. 
00:30:13.019 [2024-07-16 00:41:26.583347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.019 [2024-07-16 00:41:26.583422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.019 [2024-07-16 00:41:26.583437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.019 [2024-07-16 00:41:26.583444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.583450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.583463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 00:30:13.020 [2024-07-16 00:41:26.593366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.593433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.593448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.593454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.593460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.593473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 00:30:13.020 [2024-07-16 00:41:26.603496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.603566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.603581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.603588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.603594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.603607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 
00:30:13.020 [2024-07-16 00:41:26.613545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.613618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.613633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.613639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.613645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.613658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 00:30:13.020 [2024-07-16 00:41:26.623570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.623644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.623659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.623665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.623671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.623685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 00:30:13.020 [2024-07-16 00:41:26.633545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.633613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.633628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.633634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.633640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.633653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 
00:30:13.020 [2024-07-16 00:41:26.643623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.020 [2024-07-16 00:41:26.643686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.020 [2024-07-16 00:41:26.643701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.020 [2024-07-16 00:41:26.643708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.020 [2024-07-16 00:41:26.643714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.020 [2024-07-16 00:41:26.643727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.020 qpair failed and we were unable to recover it. 00:30:13.281 [2024-07-16 00:41:26.653647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.281 [2024-07-16 00:41:26.653715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.281 [2024-07-16 00:41:26.653730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.281 [2024-07-16 00:41:26.653737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.281 [2024-07-16 00:41:26.653743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.281 [2024-07-16 00:41:26.653756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.281 qpair failed and we were unable to recover it. 00:30:13.281 [2024-07-16 00:41:26.663682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.281 [2024-07-16 00:41:26.663777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.281 [2024-07-16 00:41:26.663796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.281 [2024-07-16 00:41:26.663803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.281 [2024-07-16 00:41:26.663809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.281 [2024-07-16 00:41:26.663822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.281 qpair failed and we were unable to recover it. 
00:30:13.281 [2024-07-16 00:41:26.673697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.281 [2024-07-16 00:41:26.673761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.281 [2024-07-16 00:41:26.673776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.281 [2024-07-16 00:41:26.673783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.281 [2024-07-16 00:41:26.673789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.281 [2024-07-16 00:41:26.673803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.281 qpair failed and we were unable to recover it. 00:30:13.281 [2024-07-16 00:41:26.683739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.281 [2024-07-16 00:41:26.683805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.281 [2024-07-16 00:41:26.683820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.281 [2024-07-16 00:41:26.683827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.281 [2024-07-16 00:41:26.683833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.281 [2024-07-16 00:41:26.683847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.693753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.693823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.693838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.693845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.693851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.693864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 
00:30:13.282 [2024-07-16 00:41:26.703681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.703752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.703767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.703774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.703780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.703793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.713808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.713871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.713886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.713893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.713899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.713912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.723838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.723908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.723923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.723930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.723936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.723949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 
00:30:13.282 [2024-07-16 00:41:26.733862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.733947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.733962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.733969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.733975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.733988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.743893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.743990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.744005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.744012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.744018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.744032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.753916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.753984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.754003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.754009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.754015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.754028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 
00:30:13.282 [2024-07-16 00:41:26.763942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.764009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.764025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.764032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.764038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.764051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.773986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.774094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.774109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.774116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.774123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.774136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.784007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.784128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.784143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.784150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.784156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.784170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 
00:30:13.282 [2024-07-16 00:41:26.794034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.794099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.794114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.794121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.794127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.794144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.804054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.804117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.804132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.804139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.804145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.804158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.814105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.814181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.814196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.814202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.814208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.814222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 
00:30:13.282 [2024-07-16 00:41:26.824018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.824094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.282 [2024-07-16 00:41:26.824110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.282 [2024-07-16 00:41:26.824117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.282 [2024-07-16 00:41:26.824123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.282 [2024-07-16 00:41:26.824136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.282 qpair failed and we were unable to recover it. 00:30:13.282 [2024-07-16 00:41:26.834220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.282 [2024-07-16 00:41:26.834334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.834350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.834357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.834363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.834376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 00:30:13.283 [2024-07-16 00:41:26.844197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.844297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.844316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.844323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.844328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.844343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 
00:30:13.283 [2024-07-16 00:41:26.854265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.854376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.854391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.854399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.854405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.854418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 00:30:13.283 [2024-07-16 00:41:26.864266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.864338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.864353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.864360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.864366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.864380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 00:30:13.283 [2024-07-16 00:41:26.874277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.874346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.874362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.874369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.874375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.874388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 
00:30:13.283 [2024-07-16 00:41:26.884301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.884363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.884379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.884386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.884392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.884410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 00:30:13.283 [2024-07-16 00:41:26.894415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.894523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.894538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.894545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.894551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.894564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 00:30:13.283 [2024-07-16 00:41:26.904362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.283 [2024-07-16 00:41:26.904464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.283 [2024-07-16 00:41:26.904479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.283 [2024-07-16 00:41:26.904486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.283 [2024-07-16 00:41:26.904492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.283 [2024-07-16 00:41:26.904505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.283 qpair failed and we were unable to recover it. 
00:30:13.545 [2024-07-16 00:41:26.914289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.914365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.914380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.914387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.914393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.914407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.924413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.924482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.924497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.924504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.924510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.924523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.934475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.934543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.934561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.934568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.934574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.934587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 
00:30:13.545 [2024-07-16 00:41:26.944449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.944524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.944539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.944546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.944552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.944566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.954537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.954608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.954623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.954630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.954636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.954650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.964535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.964602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.964617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.964624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.964630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.964644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 
00:30:13.545 [2024-07-16 00:41:26.974563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.974646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.974666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.974674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.974680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.974698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.984596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.984673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.984693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.984701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.984707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.984721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:26.994640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:26.994705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:26.994721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:26.994728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:26.994734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:26.994748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 
00:30:13.545 [2024-07-16 00:41:27.004652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:27.004715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:27.004730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:27.004737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:27.004743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:27.004756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:27.014654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:27.014724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:27.014739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.545 [2024-07-16 00:41:27.014746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.545 [2024-07-16 00:41:27.014752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.545 [2024-07-16 00:41:27.014765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.545 qpair failed and we were unable to recover it. 00:30:13.545 [2024-07-16 00:41:27.024706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.545 [2024-07-16 00:41:27.024777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.545 [2024-07-16 00:41:27.024799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.024806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.024812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.024826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 
00:30:13.546 [2024-07-16 00:41:27.034739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.034848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.034863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.034870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.034876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.034890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.044788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.044853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.044869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.044875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.044882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.044895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.054837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.054953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.054969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.054976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.054982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.054995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 
00:30:13.546 [2024-07-16 00:41:27.064815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.064911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.064927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.064934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.064944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.064958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.074743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.074809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.074824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.074831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.074837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.074850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.084877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.084960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.084975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.084982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.084988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.085002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 
00:30:13.546 [2024-07-16 00:41:27.094913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.094980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.094997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.095004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.095010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.095025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.104921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.104993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.105010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.105016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.105022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.105036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.114953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.115022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.115037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.115044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.115051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.115064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 
00:30:13.546 [2024-07-16 00:41:27.124992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.125057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.125072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.125079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.125085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.125098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.135236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.135312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.135329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.135336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.135342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.135357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.144934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.145006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.145021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.145028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.145034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.145048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 
00:30:13.546 [2024-07-16 00:41:27.155068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.155127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.155142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.155149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.155158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.155172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.546 qpair failed and we were unable to recover it. 00:30:13.546 [2024-07-16 00:41:27.165085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.546 [2024-07-16 00:41:27.165153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.546 [2024-07-16 00:41:27.165169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.546 [2024-07-16 00:41:27.165176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.546 [2024-07-16 00:41:27.165182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.546 [2024-07-16 00:41:27.165195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.547 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.175129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.175194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.175210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.175216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.175222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.175241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 
00:30:13.807 [2024-07-16 00:41:27.185200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.185314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.185330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.185336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.185343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.185356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.195202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.195284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.195300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.195307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.195313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.195326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.205211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.205284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.205300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.205307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.205312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.205326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 
00:30:13.807 [2024-07-16 00:41:27.215258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.215359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.215374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.215381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.215387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.215400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.225272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.225346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.225362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.225369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.225374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.225388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.235312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.235379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.235394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.235401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.235407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.235420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 
00:30:13.807 [2024-07-16 00:41:27.245323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.245392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.245407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.245414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.245423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.245437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.255258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.255325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.255340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.255347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.255352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.255366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.265394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.265467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.265482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.265489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.265495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.265509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 
00:30:13.807 [2024-07-16 00:41:27.275371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.275436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.275451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.275458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.275464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.275477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.285473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.285538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.285553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.285560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.285566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.807 [2024-07-16 00:41:27.285579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.807 qpair failed and we were unable to recover it. 00:30:13.807 [2024-07-16 00:41:27.295506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.807 [2024-07-16 00:41:27.295574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.807 [2024-07-16 00:41:27.295590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.807 [2024-07-16 00:41:27.295596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.807 [2024-07-16 00:41:27.295602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.295615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 
00:30:13.808 [2024-07-16 00:41:27.305389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.305459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.305475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.305482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.305487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.305501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.315511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.315583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.315598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.315605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.315611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.315624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.325559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.325626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.325641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.325648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.325654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.325668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 
00:30:13.808 [2024-07-16 00:41:27.335597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.335688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.335702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.335713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.335719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.335732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.345554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.345628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.345643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.345650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.345656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.345669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.355671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.355733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.355747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.355754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.355760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.355773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 
00:30:13.808 [2024-07-16 00:41:27.365657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.365761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.365777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.365784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.365790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.365804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.375736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.375802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.375817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.375824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.375830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.375843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.385718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.385788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.385803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.385810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.385816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.385830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 
00:30:13.808 [2024-07-16 00:41:27.395678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.395740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.395755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.395762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.395768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.395781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.405770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.405842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.405867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.405875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.405882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.405900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 00:30:13.808 [2024-07-16 00:41:27.415696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.415766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.415783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.415790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.415796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.808 [2024-07-16 00:41:27.415811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.808 qpair failed and we were unable to recover it. 
00:30:13.808 [2024-07-16 00:41:27.425840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.808 [2024-07-16 00:41:27.425969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.808 [2024-07-16 00:41:27.425993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.808 [2024-07-16 00:41:27.426006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.808 [2024-07-16 00:41:27.426013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.809 [2024-07-16 00:41:27.426032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.809 qpair failed and we were unable to recover it. 00:30:13.809 [2024-07-16 00:41:27.435823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.809 [2024-07-16 00:41:27.435887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.809 [2024-07-16 00:41:27.435905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.809 [2024-07-16 00:41:27.435912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.809 [2024-07-16 00:41:27.435918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:13.809 [2024-07-16 00:41:27.435933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.809 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.445881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.445950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.445966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.445973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.445979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.445994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-16 00:41:27.455924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.455990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.456006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.456013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.456019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.456033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.465836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.465908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.465924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.465931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.465937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.465952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.475937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.475998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.476015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.476022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.476028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.476041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-16 00:41:27.486000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.486111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.486126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.486133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.486139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.486153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.496031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.496098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.496114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.496121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.496127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.496140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.506067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.506142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.506157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.506164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.506170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.506184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-16 00:41:27.515940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.516011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.516026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.516037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.516044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.516057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.526130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.526199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.526214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.526221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.526227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.526247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.536160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.536233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.536249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.536256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.536262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.536275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-16 00:41:27.546157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.546226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.546245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.546252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.546258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.546272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.556157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.556219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.556240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.556247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.556253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.556266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-16 00:41:27.566228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.126 [2024-07-16 00:41:27.566292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.126 [2024-07-16 00:41:27.566308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.126 [2024-07-16 00:41:27.566315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.126 [2024-07-16 00:41:27.566321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.126 [2024-07-16 00:41:27.566335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-16 00:41:27.576259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.576329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.576344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.576351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.576357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.576370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.586293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.586361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.586376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.586383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.586389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.586403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.596268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.596330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.596346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.596352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.596358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.596372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-16 00:41:27.606332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.606399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.606417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.606424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.606430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.606444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.616474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.616541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.616556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.616562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.616568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.616581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.626403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.626476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.626492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.626499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.626505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.626518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-16 00:41:27.636929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.636989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.637004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.637010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.637016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.637030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.646462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.646531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.646546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.646552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.646559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.646572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.656522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.656591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.656606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.656613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.656619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.656631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-16 00:41:27.666428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.666501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.666516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.666523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.666529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.666542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.676484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.676551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.676566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.676573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.676579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.676592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.686621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.686696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.686712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.686718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.686724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.686737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-16 00:41:27.696617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.696683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.127 [2024-07-16 00:41:27.696702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.127 [2024-07-16 00:41:27.696709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.127 [2024-07-16 00:41:27.696715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.127 [2024-07-16 00:41:27.696728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-16 00:41:27.706630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.127 [2024-07-16 00:41:27.706700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.128 [2024-07-16 00:41:27.706715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.128 [2024-07-16 00:41:27.706722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.128 [2024-07-16 00:41:27.706728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.128 [2024-07-16 00:41:27.706741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-16 00:41:27.716615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.128 [2024-07-16 00:41:27.716679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.128 [2024-07-16 00:41:27.716694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.128 [2024-07-16 00:41:27.716701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.128 [2024-07-16 00:41:27.716708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.128 [2024-07-16 00:41:27.716721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.128 qpair failed and we were unable to recover it. 
00:30:14.128 [2024-07-16 00:41:27.726784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.128 [2024-07-16 00:41:27.726850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.128 [2024-07-16 00:41:27.726866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.128 [2024-07-16 00:41:27.726873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.128 [2024-07-16 00:41:27.726879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.128 [2024-07-16 00:41:27.726892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-16 00:41:27.736702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.128 [2024-07-16 00:41:27.736770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.128 [2024-07-16 00:41:27.736786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.128 [2024-07-16 00:41:27.736792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.128 [2024-07-16 00:41:27.736798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.128 [2024-07-16 00:41:27.736815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-16 00:41:27.746769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.128 [2024-07-16 00:41:27.746841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.128 [2024-07-16 00:41:27.746857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.128 [2024-07-16 00:41:27.746864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.128 [2024-07-16 00:41:27.746869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.128 [2024-07-16 00:41:27.746883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.128 qpair failed and we were unable to recover it. 
00:30:14.389 [2024-07-16 00:41:27.756615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.389 [2024-07-16 00:41:27.756728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.389 [2024-07-16 00:41:27.756743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.389 [2024-07-16 00:41:27.756751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.389 [2024-07-16 00:41:27.756764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.389 [2024-07-16 00:41:27.756779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.389 qpair failed and we were unable to recover it. 00:30:14.389 [2024-07-16 00:41:27.766781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.389 [2024-07-16 00:41:27.766846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.389 [2024-07-16 00:41:27.766862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.389 [2024-07-16 00:41:27.766869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.389 [2024-07-16 00:41:27.766875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.389 [2024-07-16 00:41:27.766888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.389 qpair failed and we were unable to recover it. 00:30:14.389 [2024-07-16 00:41:27.776752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.389 [2024-07-16 00:41:27.776863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.389 [2024-07-16 00:41:27.776878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.389 [2024-07-16 00:41:27.776885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.389 [2024-07-16 00:41:27.776891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.389 [2024-07-16 00:41:27.776904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.389 qpair failed and we were unable to recover it. 
00:30:14.389 [2024-07-16 00:41:27.786865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.389 [2024-07-16 00:41:27.786937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.389 [2024-07-16 00:41:27.786971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.389 [2024-07-16 00:41:27.786980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.389 [2024-07-16 00:41:27.786986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.389 [2024-07-16 00:41:27.787004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.389 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.796803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.796869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.796893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.796902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.796908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.796928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.806873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.806943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.806969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.806977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.806984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.807002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 
00:30:14.390 [2024-07-16 00:41:27.816895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.816959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.816976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.816983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.816989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.817004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.826973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.827061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.827086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.827094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.827101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.827125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.836935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.836994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.837010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.837017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.837024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.837038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 
00:30:14.390 [2024-07-16 00:41:27.846957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.847018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.847034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.847040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.847047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.847061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.857080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.857144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.857159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.857166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.857172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.857186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.866938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.867005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.867021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.867028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.867034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.867048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 
00:30:14.390 [2024-07-16 00:41:27.877043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.877103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.877122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.877129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.877135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.877148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.887076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.390 [2024-07-16 00:41:27.887133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.390 [2024-07-16 00:41:27.887148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.390 [2024-07-16 00:41:27.887155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.390 [2024-07-16 00:41:27.887161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.390 [2024-07-16 00:41:27.887174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.390 qpair failed and we were unable to recover it. 00:30:14.390 [2024-07-16 00:41:27.897134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.897202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.897218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.897224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.897236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.897250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 
00:30:14.391 [2024-07-16 00:41:27.907150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.907280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.907296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.907303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.907309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.907323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.917140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.917200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.917215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.917222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.917228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.917251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.927174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.927239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.927255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.927262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.927268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.927281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 
00:30:14.391 [2024-07-16 00:41:27.937258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.937336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.937351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.937358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.937364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.937378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.947299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.947370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.947385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.947392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.947398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.947411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.957263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.957325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.957340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.957347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.957353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.957367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 
00:30:14.391 [2024-07-16 00:41:27.967287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.967353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.967372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.967379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.967385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.967399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.977373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.977441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.977456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.977463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.977469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.977482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 00:30:14.391 [2024-07-16 00:41:27.987379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.987491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.987510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.391 [2024-07-16 00:41:27.987517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.391 [2024-07-16 00:41:27.987523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.391 [2024-07-16 00:41:27.987538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.391 qpair failed and we were unable to recover it. 
00:30:14.391 [2024-07-16 00:41:27.997390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.391 [2024-07-16 00:41:27.997458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.391 [2024-07-16 00:41:27.997474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.392 [2024-07-16 00:41:27.997481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.392 [2024-07-16 00:41:27.997487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.392 [2024-07-16 00:41:27.997501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.392 qpair failed and we were unable to recover it. 00:30:14.392 [2024-07-16 00:41:28.007462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.392 [2024-07-16 00:41:28.007525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.392 [2024-07-16 00:41:28.007542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.392 [2024-07-16 00:41:28.007553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.392 [2024-07-16 00:41:28.007563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.392 [2024-07-16 00:41:28.007577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.392 qpair failed and we were unable to recover it. 00:30:14.392 [2024-07-16 00:41:28.017481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.392 [2024-07-16 00:41:28.017549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.392 [2024-07-16 00:41:28.017565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.392 [2024-07-16 00:41:28.017573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.392 [2024-07-16 00:41:28.017579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.392 [2024-07-16 00:41:28.017593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.392 qpair failed and we were unable to recover it. 
00:30:14.654 [2024-07-16 00:41:28.027477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.027549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.027565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.027571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.027578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.027592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.037472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.037551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.037566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.037574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.037580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.037593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.047382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.047443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.047458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.047465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.047471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.047484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 
00:30:14.654 [2024-07-16 00:41:28.057579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.057650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.057665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.057672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.057678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.057691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.067564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.067638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.067653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.067660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.067666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.067680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.077589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.077649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.077664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.077671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.077677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.077690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 
00:30:14.654 [2024-07-16 00:41:28.087616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.087689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.087705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.087712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.087718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.087732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.097686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.097751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.097766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.097774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.097783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.097796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 00:30:14.654 [2024-07-16 00:41:28.107699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.107774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.654 [2024-07-16 00:41:28.107789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.654 [2024-07-16 00:41:28.107796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.654 [2024-07-16 00:41:28.107802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.654 [2024-07-16 00:41:28.107816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.654 qpair failed and we were unable to recover it. 
00:30:14.654 [2024-07-16 00:41:28.117688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.654 [2024-07-16 00:41:28.117745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.117761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.117767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.117773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.117786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.127762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.127842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.127857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.127863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.127869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.127883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.137793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.137860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.137877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.137884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.137890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.137904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 
00:30:14.655 [2024-07-16 00:41:28.147805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.147912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.147928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.147935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.147941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.147955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.157797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.157864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.157880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.157887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.157893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.157906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.167715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.167780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.167796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.167803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.167809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.167823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 
00:30:14.655 [2024-07-16 00:41:28.177856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.177922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.177938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.177945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.177951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.177965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.187925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.187992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.188007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.188014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.188024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.188037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.197816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.197875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.197891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.197897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.197903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.197916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 
00:30:14.655 [2024-07-16 00:41:28.207927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.207986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.208000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.208007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.208013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.208026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.217991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.218055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.655 [2024-07-16 00:41:28.218071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.655 [2024-07-16 00:41:28.218077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.655 [2024-07-16 00:41:28.218083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.655 [2024-07-16 00:41:28.218097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.655 qpair failed and we were unable to recover it. 00:30:14.655 [2024-07-16 00:41:28.227922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.655 [2024-07-16 00:41:28.227991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.228007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.228014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.228020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.228035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 
00:30:14.656 [2024-07-16 00:41:28.238017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.656 [2024-07-16 00:41:28.238086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.238102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.238109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.238115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.238128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 00:30:14.656 [2024-07-16 00:41:28.248056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.656 [2024-07-16 00:41:28.248120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.248135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.248142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.248148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.248162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 00:30:14.656 [2024-07-16 00:41:28.258196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.656 [2024-07-16 00:41:28.258273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.258289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.258296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.258302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.258316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 
00:30:14.656 [2024-07-16 00:41:28.268104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.656 [2024-07-16 00:41:28.268179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.268194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.268201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.268207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.268221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 00:30:14.656 [2024-07-16 00:41:28.278116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.656 [2024-07-16 00:41:28.278179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.656 [2024-07-16 00:41:28.278194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.656 [2024-07-16 00:41:28.278205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.656 [2024-07-16 00:41:28.278211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.656 [2024-07-16 00:41:28.278224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.656 qpair failed and we were unable to recover it. 00:30:14.917 [2024-07-16 00:41:28.288144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.288203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.288219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.288225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.917 [2024-07-16 00:41:28.288236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.917 [2024-07-16 00:41:28.288250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.917 qpair failed and we were unable to recover it. 
00:30:14.917 [2024-07-16 00:41:28.298204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.298274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.298289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.298296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.917 [2024-07-16 00:41:28.298302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.917 [2024-07-16 00:41:28.298316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.917 qpair failed and we were unable to recover it. 00:30:14.917 [2024-07-16 00:41:28.308238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.308312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.308327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.308334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.917 [2024-07-16 00:41:28.308340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.917 [2024-07-16 00:41:28.308353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.917 qpair failed and we were unable to recover it. 00:30:14.917 [2024-07-16 00:41:28.318226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.318292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.318306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.318313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.917 [2024-07-16 00:41:28.318319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.917 [2024-07-16 00:41:28.318333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.917 qpair failed and we were unable to recover it. 
00:30:14.917 [2024-07-16 00:41:28.328186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.328255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.328270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.328277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.917 [2024-07-16 00:41:28.328283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.917 [2024-07-16 00:41:28.328297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.917 qpair failed and we were unable to recover it. 00:30:14.917 [2024-07-16 00:41:28.338216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.917 [2024-07-16 00:41:28.338289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.917 [2024-07-16 00:41:28.338304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.917 [2024-07-16 00:41:28.338311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.338317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.338331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.348368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.348464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.348479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.348486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.348492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.348505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 
00:30:14.918 [2024-07-16 00:41:28.358368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.358478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.358493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.358500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.358507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.358520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.368455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.368517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.368532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.368542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.368548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.368562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.378452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.378518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.378533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.378539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.378545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.378558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 
00:30:14.918 [2024-07-16 00:41:28.388467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.388571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.388586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.388593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.388599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.388612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.398460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.398531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.398547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.398554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.398560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.398573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.408479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.408535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.408550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.408556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.408562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.408576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 
00:30:14.918 [2024-07-16 00:41:28.418582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.418675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.418690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.418697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.418703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.418716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.428597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.428709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.428725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.428732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.428738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.428752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.438561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.438633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.438648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.438654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.438660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.438674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 
00:30:14.918 [2024-07-16 00:41:28.448579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.448644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.448659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.448666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.448672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.448685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.458645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.458709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.458724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.918 [2024-07-16 00:41:28.458735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.918 [2024-07-16 00:41:28.458741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.918 [2024-07-16 00:41:28.458754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.918 qpair failed and we were unable to recover it. 00:30:14.918 [2024-07-16 00:41:28.468665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.918 [2024-07-16 00:41:28.468741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.918 [2024-07-16 00:41:28.468757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.468763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.468769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.468782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 
00:30:14.919 [2024-07-16 00:41:28.478672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.478734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.478749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.478756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.478762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.478776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 00:30:14.919 [2024-07-16 00:41:28.488582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.488648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.488663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.488670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.488676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.488690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 00:30:14.919 [2024-07-16 00:41:28.498793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.498860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.498876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.498883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.498889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.498902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 
00:30:14.919 [2024-07-16 00:41:28.508785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.508860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.508875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.508882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.508888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.508902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 00:30:14.919 [2024-07-16 00:41:28.518771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.518829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.518844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.518851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.518857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.518871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 00:30:14.919 [2024-07-16 00:41:28.528765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.528826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.528841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.528848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.528854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.528868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 
00:30:14.919 [2024-07-16 00:41:28.538873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.919 [2024-07-16 00:41:28.538936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.919 [2024-07-16 00:41:28.538951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.919 [2024-07-16 00:41:28.538958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.919 [2024-07-16 00:41:28.538964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:14.919 [2024-07-16 00:41:28.538978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.919 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:41:28.548900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.180 [2024-07-16 00:41:28.548968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.180 [2024-07-16 00:41:28.548991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.180 [2024-07-16 00:41:28.548998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.180 [2024-07-16 00:41:28.549003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.180 [2024-07-16 00:41:28.549016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:41:28.558877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.180 [2024-07-16 00:41:28.558937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.180 [2024-07-16 00:41:28.558953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.180 [2024-07-16 00:41:28.558960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.180 [2024-07-16 00:41:28.558966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.180 [2024-07-16 00:41:28.558979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.180 qpair failed and we were unable to recover it. 
00:30:15.180 [2024-07-16 00:41:28.568900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.180 [2024-07-16 00:41:28.568960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.180 [2024-07-16 00:41:28.568975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.180 [2024-07-16 00:41:28.568982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.180 [2024-07-16 00:41:28.568988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.180 [2024-07-16 00:41:28.569001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:41:28.578875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.180 [2024-07-16 00:41:28.578954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.180 [2024-07-16 00:41:28.578969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.180 [2024-07-16 00:41:28.578975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.180 [2024-07-16 00:41:28.578982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.180 [2024-07-16 00:41:28.578995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.589006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.589082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.589097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.589104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.589110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.589123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.598994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.599057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.599072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.599079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.599085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.599099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.609019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.609083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.609098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.609105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.609111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.609124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.619094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.619157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.619172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.619179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.619185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.619198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.629079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.629170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.629185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.629192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.629198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.629212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.639111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.639171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.639190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.639197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.639203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.639217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.649120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.649177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.649192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.649199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.649205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.649218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.659183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.659275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.659291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.659298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.659304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.659317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.669207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.669280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.669295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.669302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.669308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.669322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.679190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.679265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.679280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.679287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.679293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.679310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.689162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.689267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.689283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.689290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.689297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.689311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.699309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.699379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.699394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.699401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.699407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.699420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.709216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.709287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.709302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.709309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.709315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.709329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.719340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.719403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.719419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.719425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.719432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.719445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.729358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.729425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.729443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.729450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.729456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.729470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.739418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.739517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.739532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.739539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.739545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.739559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.749331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.749413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.749428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.749435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.749441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.749454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.759410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.759479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.759494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.759501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.759507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.759521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.769458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.769516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.769531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.769538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.769544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.769560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.779408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.779475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.779492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.779499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.779505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.779518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.789536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.789607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.789622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.789629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.789635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.789649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.181 [2024-07-16 00:41:28.799509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.799594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.799609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.799615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.799621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.799635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 
00:30:15.181 [2024-07-16 00:41:28.809535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.181 [2024-07-16 00:41:28.809597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.181 [2024-07-16 00:41:28.809613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.181 [2024-07-16 00:41:28.809619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.181 [2024-07-16 00:41:28.809625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.181 [2024-07-16 00:41:28.809639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.181 qpair failed and we were unable to recover it. 00:30:15.442 [2024-07-16 00:41:28.819618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.819685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.819703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.819710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.819716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.819730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 00:30:15.442 [2024-07-16 00:41:28.829638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.829708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.829723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.829730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.829736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.829749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 
00:30:15.442 [2024-07-16 00:41:28.839640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.839703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.839719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.839725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.839731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.839744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 00:30:15.442 [2024-07-16 00:41:28.849681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.849740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.849755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.849761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.849767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.849780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 00:30:15.442 [2024-07-16 00:41:28.859781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.859856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.859871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.859878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.859884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.859901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 
00:30:15.442 [2024-07-16 00:41:28.869755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.869826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.869841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.869848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.442 [2024-07-16 00:41:28.869854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.442 [2024-07-16 00:41:28.869867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.442 qpair failed and we were unable to recover it. 00:30:15.442 [2024-07-16 00:41:28.879743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.442 [2024-07-16 00:41:28.879804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.442 [2024-07-16 00:41:28.879820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.442 [2024-07-16 00:41:28.879827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.879833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.879846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.889766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.889830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.889845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.889851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.889857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.889871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 
00:30:15.443 [2024-07-16 00:41:28.899839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.899905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.899921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.899928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.899934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.899947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.909856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.909929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.909947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.909954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.909960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.909974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.919843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.919934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.919949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.919956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.919962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.919975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 
00:30:15.443 [2024-07-16 00:41:28.929872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.929947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.929972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.929980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.929986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.930004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.939955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.940031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.940056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.940065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.940071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.940090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.949969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.950046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.950071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.950079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.950090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.950109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 
00:30:15.443 [2024-07-16 00:41:28.959949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.960007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.960024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.960031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.960038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.960052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.969873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.969953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.969970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.969977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.969983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.969998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:28.980051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.980158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.980174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.980181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.980187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.980200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 
00:30:15.443 [2024-07-16 00:41:28.990077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:28.990149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:28.990164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:28.990171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:28.990177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:28.990190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:29.000065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:29.000133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:29.000148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:29.000155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:29.000161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:29.000175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 00:30:15.443 [2024-07-16 00:41:29.010080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:29.010154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:29.010169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:29.010176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:29.010182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:29.010196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.443 qpair failed and we were unable to recover it. 
00:30:15.443 [2024-07-16 00:41:29.020121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.443 [2024-07-16 00:41:29.020192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.443 [2024-07-16 00:41:29.020208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.443 [2024-07-16 00:41:29.020215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.443 [2024-07-16 00:41:29.020221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.443 [2024-07-16 00:41:29.020239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 00:30:15.444 [2024-07-16 00:41:29.030184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.444 [2024-07-16 00:41:29.030285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.444 [2024-07-16 00:41:29.030301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.444 [2024-07-16 00:41:29.030307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.444 [2024-07-16 00:41:29.030314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.444 [2024-07-16 00:41:29.030327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 00:30:15.444 [2024-07-16 00:41:29.040218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.444 [2024-07-16 00:41:29.040285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.444 [2024-07-16 00:41:29.040301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.444 [2024-07-16 00:41:29.040308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.444 [2024-07-16 00:41:29.040318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.444 [2024-07-16 00:41:29.040331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 
00:30:15.444 [2024-07-16 00:41:29.050102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.444 [2024-07-16 00:41:29.050217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.444 [2024-07-16 00:41:29.050237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.444 [2024-07-16 00:41:29.050244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.444 [2024-07-16 00:41:29.050250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.444 [2024-07-16 00:41:29.050264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 00:30:15.444 [2024-07-16 00:41:29.060259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.444 [2024-07-16 00:41:29.060333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.444 [2024-07-16 00:41:29.060349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.444 [2024-07-16 00:41:29.060356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.444 [2024-07-16 00:41:29.060362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.444 [2024-07-16 00:41:29.060376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 00:30:15.444 [2024-07-16 00:41:29.070281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.444 [2024-07-16 00:41:29.070394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.444 [2024-07-16 00:41:29.070409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.444 [2024-07-16 00:41:29.070416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.444 [2024-07-16 00:41:29.070422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.444 [2024-07-16 00:41:29.070435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.444 qpair failed and we were unable to recover it. 
00:30:15.704 [2024-07-16 00:41:29.080180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.704 [2024-07-16 00:41:29.080243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.704 [2024-07-16 00:41:29.080259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.704 [2024-07-16 00:41:29.080266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.704 [2024-07-16 00:41:29.080272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.704 [2024-07-16 00:41:29.080286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.704 qpair failed and we were unable to recover it. 00:30:15.704 [2024-07-16 00:41:29.090329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.704 [2024-07-16 00:41:29.090394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.704 [2024-07-16 00:41:29.090409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.704 [2024-07-16 00:41:29.090416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.704 [2024-07-16 00:41:29.090422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.704 [2024-07-16 00:41:29.090436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.704 qpair failed and we were unable to recover it. 00:30:15.704 [2024-07-16 00:41:29.100385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.704 [2024-07-16 00:41:29.100451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.704 [2024-07-16 00:41:29.100467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.704 [2024-07-16 00:41:29.100474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.704 [2024-07-16 00:41:29.100480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.704 [2024-07-16 00:41:29.100493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.704 qpair failed and we were unable to recover it. 
00:30:15.704 [2024-07-16 00:41:29.110287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.704 [2024-07-16 00:41:29.110368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.110385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.110392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.110398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.110412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.120378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.120439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.120454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.120461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.120467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.120480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.130400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.130467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.130482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.130489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.130498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.130512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.140481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.140549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.140566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.140573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.140579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.140593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.150390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.150463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.150479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.150485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.150492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.150506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.160476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.160536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.160552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.160559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.160565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.160579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.170405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.170475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.170491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.170498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.170504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.170517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.180604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.180669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.180685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.180692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.180698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.180713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.190588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.190662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.190677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.190683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.190689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.190703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.200496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.200566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.200582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.200589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.200595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.200608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.210627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.210696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.210713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.210722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.210728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.210742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.220728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.220798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.220814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.220825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.220831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.220845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.230718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.230789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.230804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.230811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.230817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.230831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.240587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.240652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.240668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.240674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.240681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.240694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.250748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.250805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.250820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.250827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.250833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.250846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.260832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.260913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.260929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.260936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.260942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.260955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.270818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.270908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.270926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.270933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.270939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.270954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.280817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.280879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.280895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.280902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.280908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.280921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 
00:30:15.705 [2024-07-16 00:41:29.290837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.290892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.290907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.290914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.290920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.290933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.300918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.300987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.301013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.705 [2024-07-16 00:41:29.301021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.705 [2024-07-16 00:41:29.301028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.705 [2024-07-16 00:41:29.301046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.705 qpair failed and we were unable to recover it. 00:30:15.705 [2024-07-16 00:41:29.310943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.705 [2024-07-16 00:41:29.311010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.705 [2024-07-16 00:41:29.311027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.706 [2024-07-16 00:41:29.311042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.706 [2024-07-16 00:41:29.311048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.706 [2024-07-16 00:41:29.311063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.706 qpair failed and we were unable to recover it. 
00:30:15.706 [2024-07-16 00:41:29.320914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.706 [2024-07-16 00:41:29.320978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.706 [2024-07-16 00:41:29.320993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.706 [2024-07-16 00:41:29.321000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.706 [2024-07-16 00:41:29.321007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.706 [2024-07-16 00:41:29.321020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.706 qpair failed and we were unable to recover it. 00:30:15.706 [2024-07-16 00:41:29.330937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.706 [2024-07-16 00:41:29.331001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.706 [2024-07-16 00:41:29.331016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.706 [2024-07-16 00:41:29.331023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.706 [2024-07-16 00:41:29.331029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.706 [2024-07-16 00:41:29.331043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.706 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-16 00:41:29.341024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.341096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.341112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.341119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.341125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.341138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 
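Editor's note: the block above repeats a single pattern — the target rejects each I/O-queue CONNECT for controller ID 0x1 ("Unknown controller ID"), the host sees the Fabrics CONNECT fail with sct 1 / sc 130, and the qpair on tqpair=0x5d4b50 is dropped with transport error -6. That is the behaviour the target-disconnect test is exercising, not an incidental failure. A minimal sketch for summarizing a saved copy of this console output follows; the LOG path is an assumption, the harness does not write such a file itself:

LOG=nvmf-tcp-phy-autotest-console.log   # assumed: a saved copy of this console output
# how many qpairs gave up, and which qpair ids the transport error landed on
grep -c 'qpair failed and we were unable to recover it' "$LOG"
grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' "$LOG" | awk '{print $NF}' | sort | uniq -c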
00:30:15.966 [2024-07-16 00:41:29.351033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.351106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.351121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.351128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.351134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.351148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-16 00:41:29.361008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.361072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.361088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.361095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.361101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.361115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-16 00:41:29.371012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.371073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.371089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.371096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.371102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.371116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-16 00:41:29.381135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.381205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.381220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.381227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.381240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.381254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-16 00:41:29.391163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.391235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.391251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.391258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.391264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.391278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-16 00:41:29.401142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.401210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.401225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.966 [2024-07-16 00:41:29.401241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.966 [2024-07-16 00:41:29.401247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.966 [2024-07-16 00:41:29.401261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-16 00:41:29.411182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.966 [2024-07-16 00:41:29.411245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.966 [2024-07-16 00:41:29.411261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.411267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.411273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.411287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.421237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.421306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.421321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.421328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.421334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.421347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.431244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.431311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.431326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.431333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.431339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.431353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-16 00:41:29.441242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.441304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.441319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.441326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.441332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.441345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.451192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.451255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.451271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.451278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.451283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.451297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.461358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.461422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.461438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.461444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.461450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.461464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-16 00:41:29.471341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.471411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.471426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.471433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.471439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.471453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.481390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.481489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.481505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.481511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.481517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.481530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.491372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.491435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.491453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.491460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.491466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.491480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-16 00:41:29.501482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.501592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.501607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.501614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.501620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.501634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.511476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.511549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.511565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.511571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.511577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.511591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.521475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.521532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.521547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.521554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.521560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.521573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-16 00:41:29.531449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.531511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.531526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.531532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.531539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.531552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.541470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.541540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.541556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.541563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.541569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.541582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.551590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.551662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.551677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.551684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.551690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.551703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
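Editor's note: "sct 1, sc 130" is the raw status pair returned in the CONNECT response. Read against the NVMe-oF specification (an interpretation, not something the log states): sct 1 is the Command Specific status type and 130 decimal is 0x82, which for the Fabrics Connect command means Connect Invalid Parameters — consistent with the target-side "Unknown controller ID 0x1" message. A small helper for translating the code, offered only as a convenience sketch:

decode_connect_sc() {            # maps a decimal sc from a Fabrics CONNECT response to its spec name
  case $(printf '0x%02x' "$1") in
    0x80) echo 'Connect Incompatible Format' ;;
    0x81) echo 'Connect Controller Busy' ;;
    0x82) echo 'Connect Invalid Parameters' ;;
    0x83) echo 'Connect Restart Discovery' ;;
    0x84) echo 'Connect Invalid Host' ;;
    *)    echo 'not a Connect command-specific code' ;;
  esac
}
decode_connect_sc 130            # prints: Connect Invalid Parameters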
00:30:15.967 [2024-07-16 00:41:29.561612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.561734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.561749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.561756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.561762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.561775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.571600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.571703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.571718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.571725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.571731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.571744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-16 00:41:29.581675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.581744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.581762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.581769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.581775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.581788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-16 00:41:29.591694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.967 [2024-07-16 00:41:29.591817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.967 [2024-07-16 00:41:29.591832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.967 [2024-07-16 00:41:29.591839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.967 [2024-07-16 00:41:29.591845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:15.967 [2024-07-16 00:41:29.591859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.967 qpair failed and we were unable to recover it. 00:30:16.229 [2024-07-16 00:41:29.601669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.229 [2024-07-16 00:41:29.601729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.229 [2024-07-16 00:41:29.601744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.229 [2024-07-16 00:41:29.601751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.229 [2024-07-16 00:41:29.601757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.229 [2024-07-16 00:41:29.601771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.229 qpair failed and we were unable to recover it. 00:30:16.229 [2024-07-16 00:41:29.611719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.229 [2024-07-16 00:41:29.611780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.229 [2024-07-16 00:41:29.611795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.229 [2024-07-16 00:41:29.611801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.229 [2024-07-16 00:41:29.611807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.229 [2024-07-16 00:41:29.611821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.229 qpair failed and we were unable to recover it. 
00:30:16.229 [2024-07-16 00:41:29.621817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.229 [2024-07-16 00:41:29.621886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.229 [2024-07-16 00:41:29.621901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.229 [2024-07-16 00:41:29.621907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.229 [2024-07-16 00:41:29.621914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.229 [2024-07-16 00:41:29.621930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.229 qpair failed and we were unable to recover it. 00:30:16.229 [2024-07-16 00:41:29.631864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.229 [2024-07-16 00:41:29.631940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.229 [2024-07-16 00:41:29.631955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.229 [2024-07-16 00:41:29.631961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.229 [2024-07-16 00:41:29.631967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.229 [2024-07-16 00:41:29.631981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.229 qpair failed and we were unable to recover it. 00:30:16.229 [2024-07-16 00:41:29.641826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.229 [2024-07-16 00:41:29.641888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.229 [2024-07-16 00:41:29.641903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.229 [2024-07-16 00:41:29.641910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.641916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.641929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 
00:30:16.230 [2024-07-16 00:41:29.651839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.651911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.651936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.651944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.651951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.651969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.661938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.662060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.662086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.662094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.662101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.662119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.671921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.672031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.672052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.672060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.672066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.672081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 
00:30:16.230 [2024-07-16 00:41:29.681869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.681933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.681950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.681957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.681963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.681977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.691931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.691992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.692007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.692014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.692020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.692034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.702011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.702077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.702092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.702099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.702105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.702119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 
00:30:16.230 [2024-07-16 00:41:29.712057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.712126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.712142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.712149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.712155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.712172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.722024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.722085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.722100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.722107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.722113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.722126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 00:30:16.230 [2024-07-16 00:41:29.732058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.230 [2024-07-16 00:41:29.732116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.230 [2024-07-16 00:41:29.732131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.230 [2024-07-16 00:41:29.732138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.230 [2024-07-16 00:41:29.732144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d4b50 00:30:16.230 [2024-07-16 00:41:29.732158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:16.230 qpair failed and we were unable to recover it. 
00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Read completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Write completed with error (sct=0, sc=8) 00:30:16.230 starting I/O failed 00:30:16.230 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Read completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Read completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Read completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Read completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 Write completed with error (sct=0, sc=8) 00:30:16.231 starting I/O failed 00:30:16.231 [2024-07-16 00:41:29.732481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.231 [2024-07-16 00:41:29.742011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.742070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.742086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.742092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:30:16.231 [2024-07-16 00:41:29.742096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8ac000b90 00:30:16.231 [2024-07-16 00:41:29.742113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.231 qpair failed and we were unable to recover it. 00:30:16.231 [2024-07-16 00:41:29.752156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.752258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.752271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.752276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.231 [2024-07-16 00:41:29.752281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8ac000b90 00:30:16.231 [2024-07-16 00:41:29.752292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.231 qpair failed and we were unable to recover it. 00:30:16.231 [2024-07-16 00:41:29.752592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5da750 is same with the state(5) to be set 00:30:16.231 [2024-07-16 00:41:29.762161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.762314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.762377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.762401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.231 [2024-07-16 00:41:29.762422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8a4000b90 00:30:16.231 [2024-07-16 00:41:29.762475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.231 qpair failed and we were unable to recover it. 
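Editor's note: the burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines just above is the in-flight I/O being completed in error once the queue disappears; sct 0 / sc 8 is the generic "Command Aborted due to SQ Deletion" status, so queued commands are failed back to the caller rather than lost. Note also that from this point the failures are tracked under different tqpair objects (0x7fa8ac000b90, 0x7fa8a4000b90, 0x7fa8b4000b90) and qpair ids 1-3 rather than 0x5d4b50 / id 4. A quick tally of the aborted I/O, assuming the same saved log as above:

grep -Eo '(Read|Write) completed with error \(sct=0, sc=8\)' "$LOG" | sort | uniq -c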
00:30:16.231 [2024-07-16 00:41:29.772195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.772338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.772380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.772400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.231 [2024-07-16 00:41:29.772419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8a4000b90 00:30:16.231 [2024-07-16 00:41:29.772461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.231 qpair failed and we were unable to recover it. 00:30:16.231 [2024-07-16 00:41:29.782332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.782491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.782563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.782589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.231 [2024-07-16 00:41:29.782608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8b4000b90 00:30:16.231 [2024-07-16 00:41:29.782663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.231 qpair failed and we were unable to recover it. 00:30:16.231 [2024-07-16 00:41:29.792244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.231 [2024-07-16 00:41:29.792369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.231 [2024-07-16 00:41:29.792402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.231 [2024-07-16 00:41:29.792419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.231 [2024-07-16 00:41:29.792433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa8b4000b90 00:30:16.231 [2024-07-16 00:41:29.792468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.231 qpair failed and we were unable to recover it. 
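Editor's note: what follows is the tail of the test — the driver flushes the dead tqpair (the "Bad file descriptor" line), the disconnect app prints its controller/worker banner and per-run timing, and nvmftestfini unloads nvme_tcp, nvme_fabrics and nvme_keyring and kills the long-running target (pid 1293294). An out-of-band sanity check after a run like this could look as follows; it is not part of the harness and the pid is simply the one reported below:

for m in nvme_tcp nvme_fabrics nvme_keyring; do
  lsmod | grep -q "^$m " && echo "$m: still loaded" || echo "$m: unloaded"
done
kill -0 1293294 2>/dev/null && echo 'target app (1293294) still running' || echo 'target app (1293294) gone'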
00:30:16.231 [2024-07-16 00:41:29.792793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5da750 (9): Bad file descriptor 00:30:16.231 Initializing NVMe Controllers 00:30:16.231 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:16.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:16.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:16.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:16.231 Initialization complete. Launching workers. 00:30:16.231 Starting thread on core 1 00:30:16.231 Starting thread on core 2 00:30:16.231 Starting thread on core 3 00:30:16.231 Starting thread on core 0 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:16.231 00:30:16.231 real 0m11.307s 00:30:16.231 user 0m21.177s 00:30:16.231 sys 0m3.969s 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.231 ************************************ 00:30:16.231 END TEST nvmf_target_disconnect_tc2 00:30:16.231 ************************************ 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:16.231 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:16.231 rmmod nvme_tcp 00:30:16.492 rmmod nvme_fabrics 00:30:16.492 rmmod nvme_keyring 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1293294 ']' 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1293294 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1293294 ']' 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1293294 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:16.492 00:41:29 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293294 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293294' 00:30:16.492 killing process with pid 1293294 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1293294 00:30:16.492 00:41:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1293294 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.492 00:41:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.037 00:41:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.037 00:30:19.037 real 0m22.075s 00:30:19.037 user 0m48.768s 00:30:19.037 sys 0m10.394s 00:30:19.037 00:41:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:19.037 00:41:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:19.037 ************************************ 00:30:19.037 END TEST nvmf_target_disconnect 00:30:19.037 ************************************ 00:30:19.037 00:41:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:19.037 00:41:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:19.037 00:41:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:19.037 00:41:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.037 00:41:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:19.037 00:30:19.037 real 23m26.775s 00:30:19.037 user 47m31.000s 00:30:19.037 sys 7m39.473s 00:30:19.037 00:41:32 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:19.037 00:41:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.037 ************************************ 00:30:19.037 END TEST nvmf_tcp 00:30:19.037 ************************************ 00:30:19.037 00:41:32 -- common/autotest_common.sh@1142 -- # return 0 00:30:19.037 00:41:32 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:19.037 00:41:32 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.037 00:41:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:19.037 00:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.037 00:41:32 -- common/autotest_common.sh@10 
-- # set +x 00:30:19.037 ************************************ 00:30:19.037 START TEST spdkcli_nvmf_tcp 00:30:19.037 ************************************ 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.037 * Looking for test storage... 00:30:19.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1295126 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1295126 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1295126 ']' 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.037 00:41:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.037 [2024-07-16 00:41:32.540821] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:30:19.037 [2024-07-16 00:41:32.540876] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295126 ] 00:30:19.037 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.037 [2024-07-16 00:41:32.608206] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.298 [2024-07-16 00:41:32.673871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.298 [2024-07-16 00:41:32.673873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.869 00:41:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:19.869 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:19.869 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:19.869 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:19.869 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:19.869 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:19.869 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:19.869 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.869 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.869 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:19.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:19.869 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:19.869 ' 00:30:22.415 [2024-07-16 00:41:35.668149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.357 [2024-07-16 00:41:36.880080] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:25.898 [2024-07-16 00:41:39.295129] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:27.810 [2024-07-16 00:41:41.377352] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:29.725 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:29.725 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:29.725 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.725 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.725 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:29.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:29.725 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.725 00:41:43 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:29.725 00:41:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.986 00:41:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:29.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:29.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:29.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:29.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:29.986 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:29.986 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:29.986 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:29.986 ' 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:35.269 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:35.269 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:30:35.269 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:35.269 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:35.269 00:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:35.269 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.269 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.269 00:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1295126 ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1295126' 00:30:35.270 killing process with pid 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1295126 ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1295126 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1295126 ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1295126 00:30:35.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1295126) - No such process 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1295126 is not found' 00:30:35.270 Process with pid 1295126 is not found 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:35.270 00:30:35.270 real 0m16.274s 00:30:35.270 user 0m34.571s 00:30:35.270 sys 0m0.755s 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:35.270 00:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.270 
************************************ 00:30:35.270 END TEST spdkcli_nvmf_tcp 00:30:35.270 ************************************ 00:30:35.270 00:41:48 -- common/autotest_common.sh@1142 -- # return 0 00:30:35.270 00:41:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.270 00:41:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:35.270 00:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.270 00:41:48 -- common/autotest_common.sh@10 -- # set +x 00:30:35.270 ************************************ 00:30:35.270 START TEST nvmf_identify_passthru 00:30:35.270 ************************************ 00:30:35.270 00:41:48 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.270 * Looking for test storage... 00:30:35.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.270 00:41:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:35.270 00:41:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.270 00:41:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:35.270 00:41:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.270 00:41:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.270 00:41:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:35.270 00:41:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.270 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:35.271 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:35.271 00:41:48 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:35.271 00:41:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.415 00:41:56 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:43.415 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:43.415 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.415 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:43.416 Found net devices under 0000:31:00.0: cvl_0_0 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:43.416 Found net devices under 0000:31:00.1: cvl_0_1 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
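The scan above matched the two Intel E810 ports (device ID 0x159b at 0000:31:00.0 and 0000:31:00.1, ice driver) and found their kernel netdevs cvl_0_0 and cvl_0_1 under sysfs. A quick manual cross-check of the same facts, using only lspci and sysfs (these commands are a sketch, not part of the test script):

    # Hedged sketch: confirm the E810 ports and the netdev names beneath them.
    lspci -d 8086:159b
    ls /sys/bus/pci/devices/0000:31:00.0/net /sys/bus/pci/devices/0000:31:00.1/net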
00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:30:43.416 00:30:43.416 --- 10.0.0.2 ping statistics --- 00:30:43.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.416 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.479 ms 00:30:43.416 00:30:43.416 --- 10.0.0.1 ping statistics --- 00:30:43.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.416 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:43.416 00:41:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:43.416 00:41:56 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:43.416 00:41:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.986 
00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:30:43.986 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:43.986 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:43.986 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:43.986 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1302633 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:44.557 00:41:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1302633 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1302633 ']' 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.557 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:44.558 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.558 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:44.558 00:41:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.558 [2024-07-16 00:41:58.023706] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:30:44.558 [2024-07-16 00:41:58.023766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.558 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.558 [2024-07-16 00:41:58.113352] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.558 [2024-07-16 00:41:58.181203] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.558 [2024-07-16 00:41:58.181247] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
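The target for the passthru test is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the script can enable the passthru identify handler before the framework initializes; only then are the TCP transport, the PCIe-backed bdev, and the subsystem created. A rough equivalent of that RPC sequence, mirroring the rpc_cmd calls that follow in this log (the rpc.py path and the default RPC socket are assumed; the script itself goes through rpc_cmd), is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_set_config --passthru-identify-ctrlr      # must precede framework init
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420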
00:30:44.558 [2024-07-16 00:41:58.181255] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.558 [2024-07-16 00:41:58.181262] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.558 [2024-07-16 00:41:58.181267] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.558 [2024-07-16 00:41:58.181479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.558 [2024-07-16 00:41:58.181648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.558 [2024-07-16 00:41:58.181806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.558 [2024-07-16 00:41:58.181806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:45.498 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.498 INFO: Log level set to 20 00:30:45.498 INFO: Requests: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "method": "nvmf_set_config", 00:30:45.498 "id": 1, 00:30:45.498 "params": { 00:30:45.498 "admin_cmd_passthru": { 00:30:45.498 "identify_ctrlr": true 00:30:45.498 } 00:30:45.498 } 00:30:45.498 } 00:30:45.498 00:30:45.498 INFO: response: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "id": 1, 00:30:45.498 "result": true 00:30:45.498 } 00:30:45.498 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.498 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.498 INFO: Setting log level to 20 00:30:45.498 INFO: Setting log level to 20 00:30:45.498 INFO: Log level set to 20 00:30:45.498 INFO: Log level set to 20 00:30:45.498 INFO: Requests: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "method": "framework_start_init", 00:30:45.498 "id": 1 00:30:45.498 } 00:30:45.498 00:30:45.498 INFO: Requests: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "method": "framework_start_init", 00:30:45.498 "id": 1 00:30:45.498 } 00:30:45.498 00:30:45.498 [2024-07-16 00:41:58.873660] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:45.498 INFO: response: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "id": 1, 00:30:45.498 "result": true 00:30:45.498 } 00:30:45.498 00:30:45.498 INFO: response: 00:30:45.498 { 00:30:45.498 "jsonrpc": "2.0", 00:30:45.498 "id": 1, 00:30:45.498 "result": true 00:30:45.498 } 00:30:45.498 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.498 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.498 00:41:58 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:45.498 INFO: Setting log level to 40 00:30:45.498 INFO: Setting log level to 40 00:30:45.498 INFO: Setting log level to 40 00:30:45.498 [2024-07-16 00:41:58.887001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.498 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.498 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.498 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.759 Nvme0n1 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.759 [2024-07-16 00:41:59.269506] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.759 [ 00:30:45.759 { 00:30:45.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:45.759 "subtype": "Discovery", 00:30:45.759 "listen_addresses": [], 00:30:45.759 "allow_any_host": true, 00:30:45.759 "hosts": [] 00:30:45.759 }, 00:30:45.759 { 00:30:45.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.759 "subtype": "NVMe", 00:30:45.759 "listen_addresses": [ 00:30:45.759 { 00:30:45.759 "trtype": "TCP", 00:30:45.759 "adrfam": "IPv4", 00:30:45.759 "traddr": "10.0.0.2", 00:30:45.759 "trsvcid": "4420" 00:30:45.759 } 00:30:45.759 ], 00:30:45.759 "allow_any_host": true, 00:30:45.759 "hosts": [], 00:30:45.759 "serial_number": 
"SPDK00000000000001", 00:30:45.759 "model_number": "SPDK bdev Controller", 00:30:45.759 "max_namespaces": 1, 00:30:45.759 "min_cntlid": 1, 00:30:45.759 "max_cntlid": 65519, 00:30:45.759 "namespaces": [ 00:30:45.759 { 00:30:45.759 "nsid": 1, 00:30:45.759 "bdev_name": "Nvme0n1", 00:30:45.759 "name": "Nvme0n1", 00:30:45.759 "nguid": "3634473052605494002538450000002B", 00:30:45.759 "uuid": "36344730-5260-5494-0025-38450000002b" 00:30:45.759 } 00:30:45.759 ] 00:30:45.759 } 00:30:45.759 ] 00:30:45.759 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:45.759 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:45.759 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:46.020 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:46.020 00:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:46.020 rmmod nvme_tcp 00:30:46.020 rmmod nvme_fabrics 00:30:46.020 rmmod nvme_keyring 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:46.020 00:41:59 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1302633 ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1302633 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1302633 ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1302633 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:46.020 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302633 00:30:46.281 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:46.281 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:46.281 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302633' 00:30:46.281 killing process with pid 1302633 00:30:46.281 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1302633 00:30:46.281 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1302633 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:46.542 00:41:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.542 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:46.542 00:41:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.460 00:42:02 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:48.460 00:30:48.460 real 0m13.342s 00:30:48.460 user 0m9.668s 00:30:48.460 sys 0m6.676s 00:30:48.460 00:42:02 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:48.460 00:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.460 ************************************ 00:30:48.460 END TEST nvmf_identify_passthru 00:30:48.460 ************************************ 00:30:48.460 00:42:02 -- common/autotest_common.sh@1142 -- # return 0 00:30:48.460 00:42:02 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.460 00:42:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:48.460 00:42:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.460 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:30:48.720 ************************************ 00:30:48.720 START TEST nvmf_dif 00:30:48.720 ************************************ 00:30:48.720 00:42:02 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.720 * Looking for test storage... 
00:30:48.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.720 00:42:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.720 00:42:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.721 00:42:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.721 00:42:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.721 00:42:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.721 00:42:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.721 00:42:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.721 00:42:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.721 00:42:02 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:48.721 00:42:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.721 00:42:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:48.721 00:42:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:48.721 00:42:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:48.721 00:42:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:48.721 00:42:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.721 00:42:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:48.721 00:42:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.721 00:42:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.721 00:42:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:56.873 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:56.873 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:56.873 Found net devices under 0000:31:00.0: cvl_0_0 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:56.873 Found net devices under 0000:31:00.1: cvl_0_1 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.873 00:42:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.873 00:42:10 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:56.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:30:56.873 00:30:56.873 --- 10.0.0.2 ping statistics --- 00:30:56.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.873 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:30:56.873 00:42:10 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:56.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:30:56.874 00:30:56.874 --- 10.0.0.1 ping statistics --- 00:30:56.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.874 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:30:56.874 00:42:10 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.874 00:42:10 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:56.874 00:42:10 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:56.874 00:42:10 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:01.079 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:01.079 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:01.079 00:42:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:01.079 00:42:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1309790 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1309790 00:31:01.079 00:42:14 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1309790 ']' 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.079 00:42:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.079 [2024-07-16 00:42:14.290664] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:31:01.079 [2024-07-16 00:42:14.290730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.079 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.079 [2024-07-16 00:42:14.371685] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.079 [2024-07-16 00:42:14.444895] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.079 [2024-07-16 00:42:14.444936] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.079 [2024-07-16 00:42:14.444945] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.079 [2024-07-16 00:42:14.444953] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.079 [2024-07-16 00:42:14.444959] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
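From this point the dif test drives that target purely over JSON-RPC; the rpc_cmd lines in the trace below are the test helper that forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock socket. Condensed into direct rpc.py calls as a sketch (working directory assumed to be the SPDK repo root; all argument values are taken verbatim from the trace that follows):

    # TCP transport with DIF insert/strip enabled (dif.sh appends --dif-insert-or-strip to the transport opts)
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # null bdev used as the namespace: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # subsystem, namespace, and a TCP listener on 10.0.0.2:4420 (the address assigned inside cvl_0_0_ns_spdk)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then attaches to nqn.2016-06.io.spdk:cnode0 through the spdk_bdev ioengine, using the JSON configuration that gen_nvmf_target_json prints later in the trace.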
00:31:01.079 [2024-07-16 00:42:14.444980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:01.650 00:42:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 00:42:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.650 00:42:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:01.650 00:42:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 [2024-07-16 00:42:15.112287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.650 00:42:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 ************************************ 00:31:01.650 START TEST fio_dif_1_default 00:31:01.650 ************************************ 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 bdev_null0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.650 [2024-07-16 00:42:15.184583] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.650 { 00:31:01.650 "params": { 00:31:01.650 "name": "Nvme$subsystem", 00:31:01.650 "trtype": "$TEST_TRANSPORT", 00:31:01.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.650 "adrfam": "ipv4", 00:31:01.650 "trsvcid": "$NVMF_PORT", 00:31:01.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.650 "hdgst": ${hdgst:-false}, 00:31:01.650 "ddgst": ${ddgst:-false} 00:31:01.650 }, 00:31:01.650 "method": "bdev_nvme_attach_controller" 00:31:01.650 } 00:31:01.650 EOF 00:31:01.650 )") 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.650 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.651 "params": { 00:31:01.651 "name": "Nvme0", 00:31:01.651 "trtype": "tcp", 00:31:01.651 "traddr": "10.0.0.2", 00:31:01.651 "adrfam": "ipv4", 00:31:01.651 "trsvcid": "4420", 00:31:01.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.651 "hdgst": false, 00:31:01.651 "ddgst": false 00:31:01.651 }, 00:31:01.651 "method": "bdev_nvme_attach_controller" 00:31:01.651 }' 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.651 00:42:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.220 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:02.220 fio-3.35 00:31:02.220 Starting 1 thread 00:31:02.220 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.498 00:31:14.498 filename0: (groupid=0, jobs=1): err= 0: pid=1310321: Tue Jul 16 00:42:26 2024 00:31:14.498 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10035msec) 00:31:14.498 slat (nsec): min=5384, max=31958, avg=6104.76, stdev=1356.29 00:31:14.498 clat (usec): min=544, max=41940, avg=21154.13, stdev=20123.23 00:31:14.498 lat (usec): min=549, max=41972, avg=21160.23, stdev=20123.22 00:31:14.498 clat percentiles (usec): 00:31:14.498 | 1.00th=[ 693], 5.00th=[ 816], 10.00th=[ 840], 20.00th=[ 865], 00:31:14.498 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:31:14.498 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:14.498 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:14.498 | 99.99th=[41681] 00:31:14.498 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=756.80, stdev=28.00, samples=20 00:31:14.498 iops : min= 168, max= 192, 
avg=189.20, stdev= 7.00, samples=20 00:31:14.498 lat (usec) : 750=1.69%, 1000=47.89% 00:31:14.498 lat (msec) : 50=50.42% 00:31:14.498 cpu : usr=95.26%, sys=4.55%, ctx=11, majf=0, minf=223 00:31:14.498 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.498 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.498 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.498 00:31:14.498 Run status group 0 (all jobs): 00:31:14.498 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7584KiB (7766kB), run=10035-10035msec 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:31:14.498 real 0m11.120s 00:31:14.498 user 0m22.515s 00:31:14.498 sys 0m0.765s 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 ************************************ 00:31:14.498 END TEST fio_dif_1_default 00:31:14.498 ************************************ 00:31:14.498 00:42:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:14.498 00:42:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:14.498 00:42:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:14.498 00:42:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 ************************************ 00:31:14.498 START TEST fio_dif_1_multi_subsystems 00:31:14.498 ************************************ 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:14.498 00:42:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 bdev_null0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.498 [2024-07-16 00:42:26.384066] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:14.498 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.499 bdev_null1 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:14.499 { 00:31:14.499 "params": { 00:31:14.499 "name": "Nvme$subsystem", 00:31:14.499 "trtype": "$TEST_TRANSPORT", 00:31:14.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.499 "adrfam": "ipv4", 00:31:14.499 "trsvcid": "$NVMF_PORT", 00:31:14.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.499 "hdgst": ${hdgst:-false}, 00:31:14.499 "ddgst": ${ddgst:-false} 00:31:14.499 }, 00:31:14.499 "method": "bdev_nvme_attach_controller" 00:31:14.499 } 00:31:14.499 EOF 00:31:14.499 )") 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:14.499 { 00:31:14.499 "params": { 00:31:14.499 "name": "Nvme$subsystem", 00:31:14.499 "trtype": "$TEST_TRANSPORT", 00:31:14.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.499 "adrfam": "ipv4", 00:31:14.499 "trsvcid": "$NVMF_PORT", 00:31:14.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.499 "hdgst": ${hdgst:-false}, 00:31:14.499 "ddgst": ${ddgst:-false} 00:31:14.499 }, 00:31:14.499 "method": "bdev_nvme_attach_controller" 00:31:14.499 } 00:31:14.499 EOF 00:31:14.499 )") 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:14.499 "params": { 00:31:14.499 "name": "Nvme0", 00:31:14.499 "trtype": "tcp", 00:31:14.499 "traddr": "10.0.0.2", 00:31:14.499 "adrfam": "ipv4", 00:31:14.499 "trsvcid": "4420", 00:31:14.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.499 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.499 "hdgst": false, 00:31:14.499 "ddgst": false 00:31:14.499 }, 00:31:14.499 "method": "bdev_nvme_attach_controller" 00:31:14.499 },{ 00:31:14.499 "params": { 00:31:14.499 "name": "Nvme1", 00:31:14.499 "trtype": "tcp", 00:31:14.499 "traddr": "10.0.0.2", 00:31:14.499 "adrfam": "ipv4", 00:31:14.499 "trsvcid": "4420", 00:31:14.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.499 "hdgst": false, 00:31:14.499 "ddgst": false 00:31:14.499 }, 00:31:14.499 "method": "bdev_nvme_attach_controller" 00:31:14.499 }' 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.499 00:42:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.499 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.499 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.499 fio-3.35 00:31:14.499 Starting 2 threads 00:31:14.499 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.526 00:31:24.526 filename0: (groupid=0, jobs=1): err= 0: pid=1312808: Tue Jul 16 00:42:37 2024 00:31:24.526 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10028msec) 00:31:24.526 slat (nsec): min=5391, max=53573, avg=6353.35, stdev=2182.78 00:31:24.526 clat (usec): min=40928, max=43050, avg=41936.74, stdev=251.70 00:31:24.526 lat (usec): min=40934, max=43066, avg=41943.09, stdev=251.97 00:31:24.526 clat percentiles (usec): 00:31:24.526 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:31:24.526 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:24.526 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:24.526 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:24.526 | 99.99th=[43254] 
00:31:24.526 bw ( KiB/s): min= 352, max= 384, per=49.31%, avg=380.80, stdev= 9.85, samples=20 00:31:24.526 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:24.526 lat (msec) : 50=100.00% 00:31:24.526 cpu : usr=96.85%, sys=2.93%, ctx=14, majf=0, minf=163 00:31:24.526 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.526 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.526 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.526 filename1: (groupid=0, jobs=1): err= 0: pid=1312809: Tue Jul 16 00:42:37 2024 00:31:24.526 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:31:24.526 slat (nsec): min=5385, max=41034, avg=6243.46, stdev=1639.10 00:31:24.526 clat (usec): min=40860, max=41953, avg=40986.77, stdev=65.49 00:31:24.526 lat (usec): min=40866, max=41994, avg=40993.01, stdev=66.28 00:31:24.526 clat percentiles (usec): 00:31:24.526 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:24.526 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:24.526 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:24.526 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:24.526 | 99.99th=[42206] 00:31:24.526 bw ( KiB/s): min= 384, max= 416, per=50.48%, avg=389.05, stdev=11.99, samples=19 00:31:24.526 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:31:24.526 lat (msec) : 50=100.00% 00:31:24.526 cpu : usr=97.30%, sys=2.49%, ctx=15, majf=0, minf=95 00:31:24.526 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.526 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.526 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.526 00:31:24.526 Run status group 0 (all jobs): 00:31:24.526 READ: bw=771KiB/s (789kB/s), 381KiB/s-390KiB/s (390kB/s-400kB/s), io=7728KiB (7913kB), run=10005-10028msec 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:31:24.526 real 0m11.374s 00:31:24.526 user 0m32.352s 00:31:24.526 sys 0m0.921s 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 ************************************ 00:31:24.526 END TEST fio_dif_1_multi_subsystems 00:31:24.526 ************************************ 00:31:24.526 00:42:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:24.526 00:42:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:24.526 00:42:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:24.526 00:42:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 ************************************ 00:31:24.526 START TEST fio_dif_rand_params 00:31:24.526 ************************************ 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:24.526 00:42:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 bdev_null0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.526 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:24.527 [2024-07-16 00:42:37.814778] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:24.527 { 00:31:24.527 "params": { 00:31:24.527 "name": "Nvme$subsystem", 00:31:24.527 "trtype": "$TEST_TRANSPORT", 00:31:24.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.527 "adrfam": "ipv4", 00:31:24.527 "trsvcid": "$NVMF_PORT", 00:31:24.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.527 "hdgst": ${hdgst:-false}, 00:31:24.527 "ddgst": ${ddgst:-false} 00:31:24.527 }, 00:31:24.527 "method": "bdev_nvme_attach_controller" 00:31:24.527 } 00:31:24.527 EOF 00:31:24.527 )") 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
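The ldd / grep / awk records just above come from autotest_common.sh's fio_plugin helper: it probes the spdk_bdev fio plugin for linked sanitizer runtimes (libasan, then libclang_rt.asan) and preloads whatever it finds ahead of the plugin itself before executing /usr/src/fio/fio; the resulting LD_PRELOAD assignment and fio command line appear a few records further down. A minimal sketch of that flow is below, using the plugin path and command line shown in the trace; the loop structure and variable names are illustrative.

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
preload=
for sanitizer in libasan libclang_rt.asan; do
    # resolved path of the sanitizer runtime, if the plugin links against it at all
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$lib" ]] && preload+="$lib "
done
# sanitizer runtimes (if any) must come before the fio plugin in LD_PRELOAD;
# the bdev JSON arrives on fd 62 and the generated fio job file on fd 61
LD_PRELOAD="$preload $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61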
00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:24.527 "params": { 00:31:24.527 "name": "Nvme0", 00:31:24.527 "trtype": "tcp", 00:31:24.527 "traddr": "10.0.0.2", 00:31:24.527 "adrfam": "ipv4", 00:31:24.527 "trsvcid": "4420", 00:31:24.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.527 "hdgst": false, 00:31:24.527 "ddgst": false 00:31:24.527 }, 00:31:24.527 "method": "bdev_nvme_attach_controller" 00:31:24.527 }' 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.527 00:42:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.790 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:24.790 ... 
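gen_fio_conf's output itself is not echoed in this log; the banner above (randread, 128 KiB blocks, iodepth=3, spdk_bdev engine, with "..." standing for the two identical sibling jobs) is the only view of it. A job file consistent with that banner and with the parameters set earlier in the trace (bs=128k, numjobs=3, iodepth=3, runtime=5) could look like the sketch below, written as the here-doc a generator might emit; the filename (the null bdev exposed through the attached controller) and the remaining options are assumptions, not taken from the log.

cat <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=5
bs=128k
iodepth=3
numjobs=3
rw=randread

[filename0]
filename=Nvme0n1
FIO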
00:31:24.790 fio-3.35 00:31:24.790 Starting 3 threads 00:31:24.790 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.375 00:31:31.375 filename0: (groupid=0, jobs=1): err= 0: pid=1315005: Tue Jul 16 00:42:43 2024 00:31:31.375 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(123MiB/5045msec) 00:31:31.375 slat (nsec): min=7880, max=45340, avg=8735.67, stdev=1661.20 00:31:31.375 clat (usec): min=6061, max=93086, avg=15356.50, stdev=13422.89 00:31:31.375 lat (usec): min=6069, max=93095, avg=15365.24, stdev=13422.84 00:31:31.375 clat percentiles (usec): 00:31:31.375 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8979], 00:31:31.375 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10945], 60.00th=[11600], 00:31:31.375 | 70.00th=[12256], 80.00th=[13698], 90.00th=[49546], 95.00th=[51643], 00:31:31.375 | 99.00th=[54264], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:31:31.375 | 99.99th=[92799] 00:31:31.375 bw ( KiB/s): min=17920, max=31488, per=32.86%, avg=25062.40, stdev=4287.86, samples=10 00:31:31.375 iops : min= 140, max= 246, avg=195.80, stdev=33.50, samples=10 00:31:31.375 lat (msec) : 10=35.34%, 20=53.36%, 50=2.24%, 100=9.06% 00:31:31.375 cpu : usr=95.64%, sys=3.85%, ctx=236, majf=0, minf=83 00:31:31.375 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.375 issued rwts: total=982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.375 filename0: (groupid=0, jobs=1): err= 0: pid=1315006: Tue Jul 16 00:42:43 2024 00:31:31.375 read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(116MiB/5010msec) 00:31:31.375 slat (nsec): min=5469, max=32586, avg=8144.83, stdev=1566.45 00:31:31.375 clat (usec): min=6254, max=92735, avg=16162.05, stdev=13859.31 00:31:31.375 lat (usec): min=6260, max=92744, avg=16170.19, stdev=13859.50 00:31:31.375 clat percentiles (usec): 00:31:31.375 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9241], 00:31:31.375 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[11994], 00:31:31.375 | 70.00th=[13042], 80.00th=[14353], 90.00th=[49546], 95.00th=[51643], 00:31:31.375 | 99.00th=[53740], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:31:31.375 | 99.99th=[92799] 00:31:31.375 bw ( KiB/s): min=16128, max=34048, per=31.08%, avg=23705.60, stdev=5061.64, samples=10 00:31:31.375 iops : min= 126, max= 266, avg=185.20, stdev=39.54, samples=10 00:31:31.375 lat (msec) : 10=33.05%, 20=54.25%, 50=3.55%, 100=9.15% 00:31:31.375 cpu : usr=96.37%, sys=3.37%, ctx=10, majf=0, minf=80 00:31:31.375 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.375 issued rwts: total=929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.375 filename0: (groupid=0, jobs=1): err= 0: pid=1315007: Tue Jul 16 00:42:43 2024 00:31:31.375 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5010msec) 00:31:31.375 slat (nsec): min=5424, max=31335, avg=7844.62, stdev=1865.53 00:31:31.375 clat (usec): min=5881, max=90546, avg=13715.62, stdev=11367.08 00:31:31.375 lat (usec): min=5887, max=90555, avg=13723.47, stdev=11367.09 00:31:31.375 clat percentiles (usec): 
00:31:31.375 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 8848], 00:31:31.375 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10683], 60.00th=[11338], 00:31:31.375 | 70.00th=[12125], 80.00th=[12780], 90.00th=[14615], 95.00th=[50070], 00:31:31.375 | 99.00th=[53216], 99.50th=[53740], 99.90th=[88605], 99.95th=[90702], 00:31:31.375 | 99.99th=[90702] 00:31:31.375 bw ( KiB/s): min=17152, max=35328, per=36.66%, avg=27960.40, stdev=6034.65, samples=10 00:31:31.375 iops : min= 134, max= 276, avg=218.40, stdev=47.16, samples=10 00:31:31.375 lat (msec) : 10=40.55%, 20=51.69%, 50=2.01%, 100=5.75% 00:31:31.375 cpu : usr=96.09%, sys=3.65%, ctx=14, majf=0, minf=105 00:31:31.375 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.376 issued rwts: total=1095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.376 00:31:31.376 Run status group 0 (all jobs): 00:31:31.376 READ: bw=74.5MiB/s (78.1MB/s), 23.2MiB/s-27.3MiB/s (24.3MB/s-28.6MB/s), io=376MiB (394MB), run=5010-5045msec 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
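As a quick cross-check on the run summary above: with the 128 KiB block size used in this pass, fio's IOPS and bandwidth columns are tied by iops = bw / bs, so 24.3 MiB/s is about 24883 KiB/s and 24883 / 128 is roughly 194, matching the IOPS=194 reported for that job.

The destroy/create cycle traced here then rebuilds three null-bdev subsystems (0, 1 and 2) with --dif-type 2, where the previous group used --dif-type 3, before the next fio pass. Each rpc_cmd record maps onto one SPDK RPC; issued by hand through scripts/rpc.py against the running target, the sequence for subsystem 0 would look like the sketch below (the rpc.py path is an assumption, the arguments are copied from the rpc_cmd records in the trace).

rpc=./scripts/rpc.py   # assumption: rpc_cmd forwards to SPDK's rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420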
00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 bdev_null0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 [2024-07-16 00:42:44.081392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 bdev_null1 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 bdev_null2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:31.376 { 00:31:31.376 "params": { 00:31:31.376 "name": "Nvme$subsystem", 00:31:31.376 "trtype": "$TEST_TRANSPORT", 00:31:31.376 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.376 "adrfam": "ipv4", 00:31:31.376 "trsvcid": "$NVMF_PORT", 00:31:31.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.376 "hdgst": ${hdgst:-false}, 00:31:31.376 "ddgst": ${ddgst:-false} 00:31:31.376 }, 00:31:31.376 "method": "bdev_nvme_attach_controller" 00:31:31.376 } 00:31:31.376 EOF 00:31:31.376 )") 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:31.376 { 00:31:31.376 "params": { 00:31:31.376 "name": "Nvme$subsystem", 00:31:31.376 "trtype": "$TEST_TRANSPORT", 00:31:31.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.376 "adrfam": "ipv4", 00:31:31.376 "trsvcid": "$NVMF_PORT", 00:31:31.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.376 "hdgst": ${hdgst:-false}, 00:31:31.376 "ddgst": ${ddgst:-false} 00:31:31.376 }, 00:31:31.376 "method": "bdev_nvme_attach_controller" 00:31:31.376 } 00:31:31.376 EOF 00:31:31.376 )") 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:31.376 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:31.377 { 00:31:31.377 "params": { 00:31:31.377 "name": "Nvme$subsystem", 00:31:31.377 "trtype": "$TEST_TRANSPORT", 00:31:31.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.377 "adrfam": "ipv4", 00:31:31.377 "trsvcid": "$NVMF_PORT", 00:31:31.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.377 "hdgst": ${hdgst:-false}, 00:31:31.377 "ddgst": ${ddgst:-false} 00:31:31.377 }, 00:31:31.377 "method": "bdev_nvme_attach_controller" 00:31:31.377 } 00:31:31.377 EOF 00:31:31.377 )") 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:31.377 "params": { 00:31:31.377 "name": "Nvme0", 00:31:31.377 "trtype": "tcp", 00:31:31.377 "traddr": "10.0.0.2", 00:31:31.377 "adrfam": "ipv4", 00:31:31.377 "trsvcid": "4420", 00:31:31.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:31.377 "hdgst": false, 00:31:31.377 "ddgst": false 00:31:31.377 }, 00:31:31.377 "method": "bdev_nvme_attach_controller" 00:31:31.377 },{ 00:31:31.377 "params": { 00:31:31.377 "name": "Nvme1", 00:31:31.377 "trtype": "tcp", 00:31:31.377 "traddr": "10.0.0.2", 00:31:31.377 "adrfam": "ipv4", 00:31:31.377 "trsvcid": "4420", 00:31:31.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:31.377 "hdgst": false, 00:31:31.377 "ddgst": false 00:31:31.377 }, 00:31:31.377 "method": "bdev_nvme_attach_controller" 00:31:31.377 },{ 00:31:31.377 "params": { 00:31:31.377 "name": "Nvme2", 00:31:31.377 "trtype": "tcp", 00:31:31.377 "traddr": "10.0.0.2", 00:31:31.377 "adrfam": "ipv4", 00:31:31.377 "trsvcid": "4420", 00:31:31.377 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:31.377 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:31.377 "hdgst": false, 00:31:31.377 "ddgst": false 00:31:31.377 }, 00:31:31.377 "method": "bdev_nvme_attach_controller" 00:31:31.377 }' 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:31.377 00:42:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.377 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.377 ... 00:31:31.377 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.377 ... 00:31:31.377 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.377 ... 00:31:31.377 fio-3.35 00:31:31.377 Starting 24 threads 00:31:31.377 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.631 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316507: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10028msec) 00:31:43.631 slat (nsec): min=5417, max=68168, avg=10338.16, stdev=7734.31 00:31:43.631 clat (usec): min=1286, max=49579, avg=30595.55, stdev=6350.88 00:31:43.631 lat (usec): min=1295, max=49626, avg=30605.89, stdev=6350.62 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[ 1991], 5.00th=[20055], 10.00th=[23200], 20.00th=[31589], 00:31:43.631 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.631 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:43.631 | 99.00th=[34866], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:31:43.631 | 99.99th=[49546] 00:31:43.631 bw ( KiB/s): min= 1920, max= 3600, per=4.40%, avg=2085.60, stdev=369.53, samples=20 00:31:43.631 iops : min= 480, max= 900, avg=521.40, stdev=92.38, samples=20 00:31:43.631 lat (msec) : 2=1.09%, 4=1.66%, 10=0.73%, 20=1.47%, 50=95.05% 00:31:43.631 cpu : usr=99.18%, sys=0.51%, ctx=9, majf=0, minf=27 00:31:43.631 IO depths : 1=5.4%, 2=10.9%, 4=22.5%, 8=53.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=5230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316508: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=484, BW=1939KiB/s (1985kB/s)(18.9MiB/10002msec) 00:31:43.631 slat (nsec): min=5427, max=81710, avg=16665.80, stdev=11916.20 00:31:43.631 clat (usec): min=15881, max=52986, avg=32893.81, stdev=4446.40 00:31:43.631 lat (usec): min=15887, max=52996, avg=32910.47, stdev=4447.03 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[20055], 5.00th=[25035], 10.00th=[30802], 20.00th=[31851], 00:31:43.631 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:31:43.631 | 70.00th=[33162], 80.00th=[33817], 90.00th=[35914], 95.00th=[42730], 00:31:43.631 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[52691], 00:31:43.631 | 99.99th=[53216] 00:31:43.631 bw ( KiB/s): min= 1816, max= 2048, per=4.09%, avg=1940.21, stdev=59.53, samples=19 00:31:43.631 iops : min= 454, max= 512, avg=485.05, stdev=14.88, samples=19 00:31:43.631 lat (msec) : 
20=0.87%, 50=98.68%, 100=0.45% 00:31:43.631 cpu : usr=98.98%, sys=0.69%, ctx=18, majf=0, minf=25 00:31:43.631 IO depths : 1=2.2%, 2=4.5%, 4=14.6%, 8=66.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=91.9%, 8=4.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316509: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10005msec) 00:31:43.631 slat (nsec): min=5408, max=94329, avg=19138.22, stdev=13325.09 00:31:43.631 clat (usec): min=5919, max=53721, avg=32632.09, stdev=3200.88 00:31:43.631 lat (usec): min=5942, max=53738, avg=32651.23, stdev=3201.12 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[18744], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:43.631 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.631 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:43.631 | 99.00th=[46924], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:31:43.631 | 99.99th=[53740] 00:31:43.631 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1943.58, stdev=66.90, samples=19 00:31:43.631 iops : min= 448, max= 512, avg=485.89, stdev=16.73, samples=19 00:31:43.631 lat (msec) : 10=0.12%, 20=1.11%, 50=98.20%, 100=0.57% 00:31:43.631 cpu : usr=99.06%, sys=0.60%, ctx=13, majf=0, minf=38 00:31:43.631 IO depths : 1=3.0%, 2=8.0%, 4=22.0%, 8=57.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=4882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316510: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10013msec) 00:31:43.631 slat (nsec): min=5408, max=72646, avg=12555.18, stdev=10150.70 00:31:43.631 clat (usec): min=15316, max=54547, avg=32332.77, stdev=3440.29 00:31:43.631 lat (usec): min=15325, max=54559, avg=32345.33, stdev=3440.85 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[20317], 5.00th=[25822], 10.00th=[31065], 20.00th=[31851], 00:31:43.631 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.631 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:43.631 | 99.00th=[46400], 99.50th=[50594], 99.90th=[52167], 99.95th=[54789], 00:31:43.631 | 99.99th=[54789] 00:31:43.631 bw ( KiB/s): min= 1792, max= 2176, per=4.15%, avg=1969.60, stdev=85.60, samples=20 00:31:43.631 iops : min= 448, max= 544, avg=492.40, stdev=21.40, samples=20 00:31:43.631 lat (msec) : 20=0.71%, 50=98.72%, 100=0.57% 00:31:43.631 cpu : usr=99.11%, sys=0.57%, ctx=14, majf=0, minf=22 00:31:43.631 IO depths : 1=3.7%, 2=9.0%, 4=22.6%, 8=55.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316511: Tue 
Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10003msec) 00:31:43.631 slat (usec): min=7, max=104, avg=22.33, stdev=16.64 00:31:43.631 clat (usec): min=19554, max=46801, avg=32390.16, stdev=2184.12 00:31:43.631 lat (usec): min=19563, max=46811, avg=32412.48, stdev=2184.87 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[21365], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:43.631 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.631 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:43.631 | 99.00th=[39060], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:31:43.631 | 99.99th=[46924] 00:31:43.631 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1960.42, stdev=59.48, samples=19 00:31:43.631 iops : min= 480, max= 512, avg=490.11, stdev=14.87, samples=19 00:31:43.631 lat (msec) : 20=0.33%, 50=99.67% 00:31:43.631 cpu : usr=97.98%, sys=1.05%, ctx=54, majf=0, minf=31 00:31:43.631 IO depths : 1=5.7%, 2=11.7%, 4=24.3%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316512: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10015msec) 00:31:43.631 slat (nsec): min=5413, max=91260, avg=13453.48, stdev=11009.22 00:31:43.631 clat (usec): min=18973, max=34962, avg=32509.59, stdev=1389.48 00:31:43.631 lat (usec): min=18985, max=34973, avg=32523.04, stdev=1388.37 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[26870], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:43.631 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.631 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:43.631 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:31:43.631 | 99.99th=[34866] 00:31:43.631 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1958.40, stdev=60.18, samples=20 00:31:43.631 iops : min= 480, max= 512, avg=489.60, stdev=15.05, samples=20 00:31:43.631 lat (msec) : 20=0.45%, 50=99.55% 00:31:43.631 cpu : usr=99.15%, sys=0.53%, ctx=13, majf=0, minf=23 00:31:43.631 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:43.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.631 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.631 filename0: (groupid=0, jobs=1): err= 0: pid=1316513: Tue Jul 16 00:42:55 2024 00:31:43.631 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10003msec) 00:31:43.631 slat (usec): min=5, max=151, avg=21.07, stdev=19.86 00:31:43.631 clat (usec): min=13669, max=72416, avg=32774.02, stdev=4261.29 00:31:43.631 lat (usec): min=13675, max=72431, avg=32795.09, stdev=4262.74 00:31:43.631 clat percentiles (usec): 00:31:43.631 | 1.00th=[18744], 5.00th=[28705], 10.00th=[31589], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[38536], 00:31:43.632 | 
99.00th=[50070], 99.50th=[52691], 99.90th=[72877], 99.95th=[72877], 00:31:43.632 | 99.99th=[72877] 00:31:43.632 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1936.00, stdev=75.61, samples=19 00:31:43.632 iops : min= 448, max= 512, avg=484.00, stdev=18.90, samples=19 00:31:43.632 lat (msec) : 20=1.28%, 50=97.74%, 100=0.99% 00:31:43.632 cpu : usr=99.21%, sys=0.45%, ctx=39, majf=0, minf=23 00:31:43.632 IO depths : 1=2.2%, 2=5.0%, 4=14.8%, 8=67.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename0: (groupid=0, jobs=1): err= 0: pid=1316514: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10012msec) 00:31:43.632 slat (usec): min=5, max=114, avg=21.40, stdev=14.41 00:31:43.632 clat (usec): min=17127, max=53770, avg=32363.73, stdev=2943.24 00:31:43.632 lat (usec): min=17136, max=53779, avg=32385.13, stdev=2943.84 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[20317], 5.00th=[28181], 10.00th=[31327], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:43.632 | 99.00th=[43779], 99.50th=[47973], 99.90th=[53740], 99.95th=[53740], 00:31:43.632 | 99.99th=[53740] 00:31:43.632 bw ( KiB/s): min= 1795, max= 2096, per=4.14%, avg=1961.75, stdev=76.02, samples=20 00:31:43.632 iops : min= 448, max= 524, avg=490.40, stdev=19.09, samples=20 00:31:43.632 lat (msec) : 20=0.47%, 50=99.21%, 100=0.33% 00:31:43.632 cpu : usr=98.73%, sys=0.66%, ctx=32, majf=0, minf=23 00:31:43.632 IO depths : 1=5.5%, 2=11.0%, 4=22.5%, 8=53.8%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316515: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10008msec) 00:31:43.632 slat (usec): min=5, max=113, avg=16.93, stdev=14.41 00:31:43.632 clat (usec): min=8502, max=77889, avg=32537.42, stdev=2504.28 00:31:43.632 lat (usec): min=8512, max=77906, avg=32554.34, stdev=2504.08 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:43.632 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:43.632 | 99.00th=[34866], 99.50th=[35390], 99.90th=[56886], 99.95th=[56886], 00:31:43.632 | 99.99th=[78119] 00:31:43.632 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1946.95, stdev=68.52, samples=19 00:31:43.632 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:31:43.632 lat (msec) : 10=0.33%, 20=0.37%, 50=98.98%, 100=0.33% 00:31:43.632 cpu : usr=99.01%, sys=0.60%, ctx=46, majf=0, minf=23 00:31:43.632 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316516: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:31:43.632 slat (usec): min=5, max=100, avg=22.78, stdev=15.31 00:31:43.632 clat (usec): min=15671, max=46811, avg=32495.42, stdev=1570.33 00:31:43.632 lat (usec): min=15679, max=46833, avg=32518.21, stdev=1570.24 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:31:43.632 | 99.00th=[34866], 99.50th=[35914], 99.90th=[46924], 99.95th=[46924], 00:31:43.632 | 99.99th=[46924] 00:31:43.632 bw ( KiB/s): min= 1795, max= 2052, per=4.12%, avg=1953.50, stdev=70.58, samples=20 00:31:43.632 iops : min= 448, max= 513, avg=488.15, stdev=17.82, samples=20 00:31:43.632 lat (msec) : 20=0.33%, 50=99.67% 00:31:43.632 cpu : usr=99.15%, sys=0.53%, ctx=13, majf=0, minf=20 00:31:43.632 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316517: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10005msec) 00:31:43.632 slat (nsec): min=5436, max=94108, avg=21783.10, stdev=15296.91 00:31:43.632 clat (usec): min=5587, max=54012, avg=32559.35, stdev=2718.95 00:31:43.632 lat (usec): min=5593, max=54031, avg=32581.13, stdev=2719.03 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[26608], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:43.632 | 99.00th=[42730], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:31:43.632 | 99.99th=[54264] 00:31:43.632 bw ( KiB/s): min= 1792, max= 2032, per=4.10%, avg=1944.42, stdev=62.28, samples=19 00:31:43.632 iops : min= 448, max= 508, avg=486.11, stdev=15.57, samples=19 00:31:43.632 lat (msec) : 10=0.29%, 20=0.37%, 50=98.94%, 100=0.41% 00:31:43.632 cpu : usr=98.05%, sys=0.94%, ctx=115, majf=0, minf=28 00:31:43.632 IO depths : 1=0.3%, 2=6.2%, 4=23.7%, 8=57.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=94.1%, 8=0.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316518: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10009msec) 00:31:43.632 slat (nsec): min=5481, max=96636, avg=20613.37, stdev=14046.31 00:31:43.632 clat (usec): min=20675, max=54601, avg=32636.92, stdev=2579.74 00:31:43.632 lat (usec): min=20685, max=54624, avg=32657.53, 
stdev=2579.72 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[22676], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:43.632 | 99.00th=[43254], 99.50th=[47449], 99.90th=[54789], 99.95th=[54789], 00:31:43.632 | 99.99th=[54789] 00:31:43.632 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1947.11, stdev=62.08, samples=19 00:31:43.632 iops : min= 448, max= 512, avg=486.74, stdev=15.62, samples=19 00:31:43.632 lat (msec) : 50=99.88%, 100=0.12% 00:31:43.632 cpu : usr=98.98%, sys=0.61%, ctx=15, majf=0, minf=25 00:31:43.632 IO depths : 1=5.1%, 2=10.3%, 4=22.2%, 8=55.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316519: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10018msec) 00:31:43.632 slat (nsec): min=5396, max=92628, avg=15977.35, stdev=14013.56 00:31:43.632 clat (usec): min=14907, max=59869, avg=31873.34, stdev=4245.38 00:31:43.632 lat (usec): min=14913, max=59946, avg=31889.32, stdev=4247.71 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[19792], 5.00th=[22676], 10.00th=[26346], 20.00th=[31589], 00:31:43.632 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35914], 00:31:43.632 | 99.00th=[44827], 99.50th=[51119], 99.90th=[59507], 99.95th=[60031], 00:31:43.632 | 99.99th=[60031] 00:31:43.632 bw ( KiB/s): min= 1904, max= 2384, per=4.21%, avg=1997.60, stdev=111.85, samples=20 00:31:43.632 iops : min= 476, max= 596, avg=499.40, stdev=27.96, samples=20 00:31:43.632 lat (msec) : 20=1.08%, 50=98.28%, 100=0.64% 00:31:43.632 cpu : usr=99.22%, sys=0.45%, ctx=12, majf=0, minf=26 00:31:43.632 IO depths : 1=4.1%, 2=8.6%, 4=19.7%, 8=58.8%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=5010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316520: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10001msec) 00:31:43.632 slat (nsec): min=5404, max=71287, avg=15347.23, stdev=11594.28 00:31:43.632 clat (usec): min=5527, max=54294, avg=32276.25, stdev=4528.55 00:31:43.632 lat (usec): min=5533, max=54311, avg=32291.60, stdev=4529.04 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[19792], 5.00th=[23725], 10.00th=[27132], 20.00th=[31851], 00:31:43.632 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[39584], 00:31:43.632 | 99.00th=[47973], 99.50th=[50594], 99.90th=[54264], 99.95th=[54264], 00:31:43.632 | 99.99th=[54264] 00:31:43.632 bw ( KiB/s): min= 1792, max= 2240, per=4.14%, avg=1965.47, stdev=98.38, samples=19 00:31:43.632 iops : min= 448, max= 560, avg=491.37, stdev=24.59, samples=19 
00:31:43.632 lat (msec) : 10=0.04%, 20=1.36%, 50=97.91%, 100=0.69% 00:31:43.632 cpu : usr=99.25%, sys=0.43%, ctx=13, majf=0, minf=24 00:31:43.632 IO depths : 1=2.7%, 2=6.4%, 4=17.2%, 8=62.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:43.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 complete : 0=0.0%, 4=92.3%, 8=3.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.632 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.632 filename1: (groupid=0, jobs=1): err= 0: pid=1316521: Tue Jul 16 00:42:55 2024 00:31:43.632 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10003msec) 00:31:43.632 slat (nsec): min=5410, max=85456, avg=10425.17, stdev=8084.27 00:31:43.632 clat (usec): min=18712, max=40656, avg=32494.30, stdev=1640.56 00:31:43.632 lat (usec): min=18719, max=40692, avg=32504.73, stdev=1639.80 00:31:43.632 clat percentiles (usec): 00:31:43.632 | 1.00th=[22152], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:43.632 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.632 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:43.632 | 99.00th=[34866], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:31:43.632 | 99.99th=[40633] 00:31:43.633 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1960.42, stdev=57.84, samples=19 00:31:43.633 iops : min= 480, max= 512, avg=490.11, stdev=14.46, samples=19 00:31:43.633 lat (msec) : 20=0.51%, 50=99.49% 00:31:43.633 cpu : usr=99.30%, sys=0.37%, ctx=11, majf=0, minf=34 00:31:43.633 IO depths : 1=5.8%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename1: (groupid=0, jobs=1): err= 0: pid=1316522: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10014msec) 00:31:43.633 slat (usec): min=5, max=145, avg=17.88, stdev=17.92 00:31:43.633 clat (usec): min=19088, max=42694, avg=32469.86, stdev=1496.46 00:31:43.633 lat (usec): min=19117, max=42700, avg=32487.74, stdev=1495.36 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[23725], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:43.633 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.633 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:43.633 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:31:43.633 | 99.99th=[42730] 00:31:43.633 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1958.40, stdev=57.02, samples=20 00:31:43.633 iops : min= 480, max= 512, avg=489.60, stdev=14.25, samples=20 00:31:43.633 lat (msec) : 20=0.12%, 50=99.88% 00:31:43.633 cpu : usr=99.03%, sys=0.61%, ctx=37, majf=0, minf=20 00:31:43.633 IO depths : 1=5.8%, 2=11.6%, 4=24.1%, 8=51.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316523: Tue Jul 
16 00:42:55 2024 00:31:43.633 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10005msec) 00:31:43.633 slat (nsec): min=5395, max=96873, avg=14210.10, stdev=12156.12 00:31:43.633 clat (usec): min=5540, max=54926, avg=32708.52, stdev=5374.80 00:31:43.633 lat (usec): min=5563, max=54932, avg=32722.73, stdev=5374.87 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[17433], 5.00th=[23987], 10.00th=[26346], 20.00th=[31327], 00:31:43.633 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:43.633 | 70.00th=[33424], 80.00th=[34341], 90.00th=[39060], 95.00th=[43254], 00:31:43.633 | 99.00th=[50070], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:31:43.633 | 99.99th=[54789] 00:31:43.633 bw ( KiB/s): min= 1712, max= 2112, per=4.08%, avg=1935.16, stdev=81.14, samples=19 00:31:43.633 iops : min= 428, max= 528, avg=483.79, stdev=20.29, samples=19 00:31:43.633 lat (msec) : 10=0.12%, 20=1.52%, 50=97.40%, 100=0.96% 00:31:43.633 cpu : usr=99.08%, sys=0.60%, ctx=14, majf=0, minf=21 00:31:43.633 IO depths : 1=0.6%, 2=1.5%, 4=9.4%, 8=74.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=90.7%, 8=6.2%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316524: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10012msec) 00:31:43.633 slat (usec): min=5, max=157, avg=18.27, stdev=18.48 00:31:43.633 clat (usec): min=18688, max=54723, avg=32594.07, stdev=2307.34 00:31:43.633 lat (usec): min=18709, max=54755, avg=32612.34, stdev=2306.76 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[23987], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:31:43.633 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.633 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:43.633 | 99.00th=[41157], 99.50th=[44827], 99.90th=[54789], 99.95th=[54789], 00:31:43.633 | 99.99th=[54789] 00:31:43.633 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1950.40, stdev=66.66, samples=20 00:31:43.633 iops : min= 448, max= 512, avg=487.60, stdev=16.67, samples=20 00:31:43.633 lat (msec) : 20=0.39%, 50=99.49%, 100=0.12% 00:31:43.633 cpu : usr=99.15%, sys=0.53%, ctx=21, majf=0, minf=23 00:31:43.633 IO depths : 1=5.8%, 2=11.5%, 4=23.6%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316525: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10009msec) 00:31:43.633 slat (nsec): min=5424, max=74818, avg=14997.38, stdev=11626.02 00:31:43.633 clat (usec): min=14900, max=50690, avg=31932.73, stdev=3342.87 00:31:43.633 lat (usec): min=14934, max=50708, avg=31947.73, stdev=3344.05 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[20055], 5.00th=[23725], 10.00th=[28443], 20.00th=[31851], 00:31:43.633 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.633 | 70.00th=[32900], 80.00th=[33162], 
90.00th=[33817], 95.00th=[34341], 00:31:43.633 | 99.00th=[42730], 99.50th=[44827], 99.90th=[50070], 99.95th=[50594], 00:31:43.633 | 99.99th=[50594] 00:31:43.633 bw ( KiB/s): min= 1792, max= 2288, per=4.21%, avg=1996.63, stdev=115.70, samples=19 00:31:43.633 iops : min= 448, max= 572, avg=499.16, stdev=28.92, samples=19 00:31:43.633 lat (msec) : 20=0.92%, 50=99.00%, 100=0.08% 00:31:43.633 cpu : usr=99.22%, sys=0.47%, ctx=14, majf=0, minf=23 00:31:43.633 IO depths : 1=4.2%, 2=8.6%, 4=18.4%, 8=59.6%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=92.5%, 8=2.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316526: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10005msec) 00:31:43.633 slat (nsec): min=5400, max=97821, avg=15686.85, stdev=14017.43 00:31:43.633 clat (usec): min=5584, max=59421, avg=32094.36, stdev=4528.45 00:31:43.633 lat (usec): min=5590, max=59428, avg=32110.04, stdev=4529.38 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[18744], 5.00th=[22414], 10.00th=[27919], 20.00th=[31851], 00:31:43.633 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.633 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:31:43.633 | 99.00th=[47449], 99.50th=[51119], 99.90th=[59507], 99.95th=[59507], 00:31:43.633 | 99.99th=[59507] 00:31:43.633 bw ( KiB/s): min= 1792, max= 2208, per=4.15%, avg=1969.68, stdev=96.58, samples=19 00:31:43.633 iops : min= 448, max= 552, avg=492.42, stdev=24.14, samples=19 00:31:43.633 lat (msec) : 10=0.32%, 20=1.55%, 50=97.49%, 100=0.64% 00:31:43.633 cpu : usr=99.07%, sys=0.60%, ctx=13, majf=0, minf=21 00:31:43.633 IO depths : 1=1.7%, 2=3.7%, 4=9.4%, 8=71.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=90.9%, 8=6.4%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316527: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10003msec) 00:31:43.633 slat (nsec): min=5407, max=57853, avg=7936.59, stdev=3845.89 00:31:43.633 clat (usec): min=14252, max=43860, avg=30173.97, stdev=4647.79 00:31:43.633 lat (usec): min=14261, max=43868, avg=30181.91, stdev=4648.17 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[19006], 5.00th=[20317], 10.00th=[21365], 20.00th=[25035], 00:31:43.633 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:43.633 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:43.633 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40633], 99.95th=[43779], 00:31:43.633 | 99.99th=[43779] 00:31:43.633 bw ( KiB/s): min= 1920, max= 2688, per=4.47%, avg=2120.42, stdev=195.48, samples=19 00:31:43.633 iops : min= 480, max= 672, avg=530.11, stdev=48.87, samples=19 00:31:43.633 lat (msec) : 20=3.85%, 50=96.15% 00:31:43.633 cpu : usr=99.03%, sys=0.64%, ctx=15, majf=0, minf=32 00:31:43.633 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=5292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316528: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=500, BW=2004KiB/s (2052kB/s)(19.6MiB/10018msec) 00:31:43.633 slat (nsec): min=5411, max=78910, avg=12600.62, stdev=10342.67 00:31:43.633 clat (usec): min=14797, max=50387, avg=31840.13, stdev=3792.33 00:31:43.633 lat (usec): min=14803, max=50393, avg=31852.73, stdev=3793.50 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[19268], 5.00th=[22938], 10.00th=[27657], 20.00th=[31851], 00:31:43.633 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:43.633 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:43.633 | 99.00th=[45351], 99.50th=[46400], 99.90th=[50070], 99.95th=[50594], 00:31:43.633 | 99.99th=[50594] 00:31:43.633 bw ( KiB/s): min= 1920, max= 2224, per=4.22%, avg=2000.80, stdev=90.58, samples=20 00:31:43.633 iops : min= 480, max= 556, avg=500.20, stdev=22.65, samples=20 00:31:43.633 lat (msec) : 20=2.13%, 50=97.75%, 100=0.12% 00:31:43.633 cpu : usr=99.18%, sys=0.49%, ctx=14, majf=0, minf=26 00:31:43.633 IO depths : 1=3.9%, 2=7.9%, 4=18.4%, 8=61.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:43.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.633 issued rwts: total=5018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.633 filename2: (groupid=0, jobs=1): err= 0: pid=1316529: Tue Jul 16 00:42:55 2024 00:31:43.633 read: IOPS=506, BW=2025KiB/s (2073kB/s)(19.8MiB/10009msec) 00:31:43.633 slat (nsec): min=5480, max=98017, avg=16853.09, stdev=12710.19 00:31:43.633 clat (usec): min=15721, max=53325, avg=31479.12, stdev=4378.43 00:31:43.633 lat (usec): min=15730, max=53343, avg=31495.97, stdev=4380.81 00:31:43.633 clat percentiles (usec): 00:31:43.633 | 1.00th=[20055], 5.00th=[21627], 10.00th=[23987], 20.00th=[31589], 00:31:43.633 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:31:43.633 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:31:43.633 | 99.00th=[45876], 99.50th=[46924], 99.90th=[50594], 99.95th=[53216], 00:31:43.633 | 99.99th=[53216] 00:31:43.633 bw ( KiB/s): min= 1920, max= 2336, per=4.27%, avg=2025.42, stdev=142.10, samples=19 00:31:43.633 iops : min= 480, max= 584, avg=506.32, stdev=35.54, samples=19 00:31:43.634 lat (msec) : 20=0.87%, 50=98.95%, 100=0.18% 00:31:43.634 cpu : usr=99.29%, sys=0.39%, ctx=13, majf=0, minf=23 00:31:43.634 IO depths : 1=4.1%, 2=8.4%, 4=18.7%, 8=59.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:43.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.634 complete : 0=0.0%, 4=92.4%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.634 issued rwts: total=5066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.634 filename2: (groupid=0, jobs=1): err= 0: pid=1316530: Tue Jul 16 00:42:55 2024 00:31:43.634 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:31:43.634 slat (usec): min=5, max=100, avg=18.17, stdev=14.31 00:31:43.634 clat (usec): min=13640, max=60391, avg=32565.27, stdev=4783.75 
00:31:43.634 lat (usec): min=13657, max=60441, avg=32583.44, stdev=4784.40 00:31:43.634 clat percentiles (usec): 00:31:43.634 | 1.00th=[19792], 5.00th=[24249], 10.00th=[27919], 20.00th=[31851], 00:31:43.634 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:31:43.634 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[40109], 00:31:43.634 | 99.00th=[52167], 99.50th=[53740], 99.90th=[60556], 99.95th=[60556], 00:31:43.634 | 99.99th=[60556] 00:31:43.634 bw ( KiB/s): min= 1808, max= 2160, per=4.12%, avg=1953.35, stdev=84.83, samples=20 00:31:43.634 iops : min= 452, max= 540, avg=488.15, stdev=21.26, samples=20 00:31:43.634 lat (msec) : 20=1.12%, 50=97.67%, 100=1.21% 00:31:43.634 cpu : usr=99.06%, sys=0.62%, ctx=10, majf=0, minf=23 00:31:43.634 IO depths : 1=3.7%, 2=7.8%, 4=18.1%, 8=60.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:43.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.634 complete : 0=0.0%, 4=92.3%, 8=2.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.634 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.634 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:43.634 00:31:43.634 Run status group 0 (all jobs): 00:31:43.634 READ: bw=46.3MiB/s (48.5MB/s), 1939KiB/s-2116KiB/s (1985kB/s-2167kB/s), io=464MiB (487MB), run=10001-10028msec 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 bdev_null0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 [2024-07-16 00:42:55.893382] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 bdev_null1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.634 { 00:31:43.634 "params": { 00:31:43.634 "name": "Nvme$subsystem", 00:31:43.634 "trtype": "$TEST_TRANSPORT", 00:31:43.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.634 "adrfam": "ipv4", 00:31:43.634 "trsvcid": "$NVMF_PORT", 00:31:43.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.634 "hdgst": ${hdgst:-false}, 00:31:43.634 "ddgst": ${ddgst:-false} 00:31:43.634 }, 00:31:43.634 "method": "bdev_nvme_attach_controller" 00:31:43.634 } 00:31:43.634 EOF 00:31:43.634 )") 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:43.634 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.635 { 00:31:43.635 "params": { 00:31:43.635 "name": "Nvme$subsystem", 00:31:43.635 "trtype": "$TEST_TRANSPORT", 00:31:43.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.635 "adrfam": "ipv4", 00:31:43.635 "trsvcid": "$NVMF_PORT", 00:31:43.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.635 "hdgst": ${hdgst:-false}, 00:31:43.635 "ddgst": ${ddgst:-false} 00:31:43.635 }, 00:31:43.635 "method": 
"bdev_nvme_attach_controller" 00:31:43.635 } 00:31:43.635 EOF 00:31:43.635 )") 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:43.635 "params": { 00:31:43.635 "name": "Nvme0", 00:31:43.635 "trtype": "tcp", 00:31:43.635 "traddr": "10.0.0.2", 00:31:43.635 "adrfam": "ipv4", 00:31:43.635 "trsvcid": "4420", 00:31:43.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:43.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:43.635 "hdgst": false, 00:31:43.635 "ddgst": false 00:31:43.635 }, 00:31:43.635 "method": "bdev_nvme_attach_controller" 00:31:43.635 },{ 00:31:43.635 "params": { 00:31:43.635 "name": "Nvme1", 00:31:43.635 "trtype": "tcp", 00:31:43.635 "traddr": "10.0.0.2", 00:31:43.635 "adrfam": "ipv4", 00:31:43.635 "trsvcid": "4420", 00:31:43.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.635 "hdgst": false, 00:31:43.635 "ddgst": false 00:31:43.635 }, 00:31:43.635 "method": "bdev_nvme_attach_controller" 00:31:43.635 }' 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:43.635 00:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:43.635 00:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:43.635 00:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:43.635 00:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:43.635 00:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.635 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:43.635 ... 00:31:43.635 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:43.635 ... 
00:31:43.635 fio-3.35 00:31:43.635 Starting 4 threads 00:31:43.635 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.929 00:31:48.929 filename0: (groupid=0, jobs=1): err= 0: pid=1318757: Tue Jul 16 00:43:02 2024 00:31:48.929 read: IOPS=1943, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5002msec) 00:31:48.929 slat (nsec): min=5385, max=57351, avg=7148.33, stdev=2523.07 00:31:48.929 clat (usec): min=2275, max=7233, avg=4096.84, stdev=677.01 00:31:48.929 lat (usec): min=2294, max=7239, avg=4103.99, stdev=677.07 00:31:48.929 clat percentiles (usec): 00:31:48.929 | 1.00th=[ 2933], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3654], 00:31:48.929 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:31:48.929 | 70.00th=[ 4113], 80.00th=[ 4490], 90.00th=[ 5080], 95.00th=[ 5604], 00:31:48.929 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 6783], 99.95th=[ 7111], 00:31:48.929 | 99.99th=[ 7242] 00:31:48.929 bw ( KiB/s): min=14880, max=16160, per=24.29%, avg=15454.22, stdev=462.51, samples=9 00:31:48.929 iops : min= 1860, max= 2020, avg=1931.78, stdev=57.81, samples=9 00:31:48.929 lat (msec) : 4=60.87%, 10=39.13% 00:31:48.929 cpu : usr=96.66%, sys=3.08%, ctx=7, majf=0, minf=0 00:31:48.929 IO depths : 1=0.3%, 2=1.1%, 4=71.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 issued rwts: total=9721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.929 filename0: (groupid=0, jobs=1): err= 0: pid=1318759: Tue Jul 16 00:43:02 2024 00:31:48.929 read: IOPS=1875, BW=14.6MiB/s (15.4MB/s)(73.9MiB/5042msec) 00:31:48.929 slat (nsec): min=5387, max=34414, avg=7065.68, stdev=2225.83 00:31:48.929 clat (usec): min=1822, max=42036, avg=4224.27, stdev=1013.22 00:31:48.929 lat (usec): min=1830, max=42042, avg=4231.34, stdev=1013.21 00:31:48.929 clat percentiles (usec): 00:31:48.929 | 1.00th=[ 2999], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3720], 00:31:48.929 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 4015], 00:31:48.929 | 70.00th=[ 4293], 80.00th=[ 4686], 90.00th=[ 5538], 95.00th=[ 5932], 00:31:48.929 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 8979], 00:31:48.929 | 99.99th=[42206] 00:31:48.929 bw ( KiB/s): min=14768, max=15760, per=23.75%, avg=15111.11, stdev=314.16, samples=9 00:31:48.929 iops : min= 1846, max= 1970, avg=1888.89, stdev=39.27, samples=9 00:31:48.929 lat (msec) : 2=0.04%, 4=58.04%, 10=41.88%, 50=0.03% 00:31:48.929 cpu : usr=96.57%, sys=3.17%, ctx=9, majf=0, minf=9 00:31:48.929 IO depths : 1=0.3%, 2=1.4%, 4=71.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 issued rwts: total=9455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.929 filename1: (groupid=0, jobs=1): err= 0: pid=1318760: Tue Jul 16 00:43:02 2024 00:31:48.929 read: IOPS=2243, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5002msec) 00:31:48.929 slat (nsec): min=5388, max=23975, avg=5852.69, stdev=1171.94 00:31:48.929 clat (usec): min=735, max=8061, avg=3548.43, stdev=576.71 00:31:48.929 lat (usec): min=741, max=8084, avg=3554.28, stdev=576.73 00:31:48.929 clat percentiles (usec): 00:31:48.929 | 1.00th=[ 2180], 5.00th=[ 2704], 
10.00th=[ 2868], 20.00th=[ 3064], 00:31:48.929 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3589], 60.00th=[ 3720], 00:31:48.929 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4015], 95.00th=[ 4424], 00:31:48.929 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6390], 99.95th=[ 6652], 00:31:48.929 | 99.99th=[ 6652] 00:31:48.929 bw ( KiB/s): min=16880, max=19024, per=28.41%, avg=18080.00, stdev=766.04, samples=9 00:31:48.929 iops : min= 2110, max= 2378, avg=2260.00, stdev=95.75, samples=9 00:31:48.929 lat (usec) : 750=0.01%, 1000=0.01% 00:31:48.929 lat (msec) : 2=0.62%, 4=87.11%, 10=12.24% 00:31:48.929 cpu : usr=96.20%, sys=3.44%, ctx=13, majf=0, minf=9 00:31:48.929 IO depths : 1=0.2%, 2=6.9%, 4=65.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 issued rwts: total=11221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.929 filename1: (groupid=0, jobs=1): err= 0: pid=1318761: Tue Jul 16 00:43:02 2024 00:31:48.929 read: IOPS=1940, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5002msec) 00:31:48.929 slat (nsec): min=5386, max=57723, avg=6000.20, stdev=1779.48 00:31:48.929 clat (usec): min=2045, max=7907, avg=4106.29, stdev=696.09 00:31:48.929 lat (usec): min=2066, max=7914, avg=4112.30, stdev=696.05 00:31:48.929 clat percentiles (usec): 00:31:48.929 | 1.00th=[ 2933], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3687], 00:31:48.929 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:31:48.929 | 70.00th=[ 4080], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5735], 00:31:48.929 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 6849], 99.95th=[ 7439], 00:31:48.929 | 99.99th=[ 7898] 00:31:48.929 bw ( KiB/s): min=15040, max=15791, per=24.34%, avg=15487.89, stdev=293.16, samples=9 00:31:48.929 iops : min= 1880, max= 1973, avg=1935.89, stdev=36.53, samples=9 00:31:48.929 lat (msec) : 4=63.17%, 10=36.83% 00:31:48.929 cpu : usr=97.08%, sys=2.66%, ctx=11, majf=0, minf=9 00:31:48.929 IO depths : 1=0.2%, 2=0.9%, 4=71.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.929 issued rwts: total=9705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.929 00:31:48.929 Run status group 0 (all jobs): 00:31:48.929 READ: bw=62.1MiB/s (65.2MB/s), 14.6MiB/s-17.5MiB/s (15.4MB/s-18.4MB/s), io=313MiB (329MB), run=5002-5042msec 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 00:43:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.929 00:31:48.929 real 0m24.499s 00:31:48.929 user 5m23.344s 00:31:48.929 sys 0m3.623s 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 ************************************ 00:31:48.929 END TEST fio_dif_rand_params 00:31:48.929 ************************************ 00:31:48.929 00:43:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:48.929 00:43:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:48.929 00:43:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:48.929 00:43:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.929 00:43:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 ************************************ 00:31:48.929 START TEST fio_dif_digest 00:31:48.929 ************************************ 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:48.929 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.930 bdev_null0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.930 [2024-07-16 00:43:02.385660] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 
00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:48.930 { 00:31:48.930 "params": { 00:31:48.930 "name": "Nvme$subsystem", 00:31:48.930 "trtype": "$TEST_TRANSPORT", 00:31:48.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.930 "adrfam": "ipv4", 00:31:48.930 "trsvcid": "$NVMF_PORT", 00:31:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.930 "hdgst": ${hdgst:-false}, 00:31:48.930 "ddgst": ${ddgst:-false} 00:31:48.930 }, 00:31:48.930 "method": "bdev_nvme_attach_controller" 00:31:48.930 } 00:31:48.930 EOF 00:31:48.930 )") 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
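The gen_nvmf_target_json calls traced here build the configuration that fio later receives on /dev/fd/62: each subsystem appends one bdev_nvme_attach_controller fragment to a bash array via a here-doc, the fragments are joined with IFS=',' and the result is passed through jq (the full parameter set shows up in the printf traced just below). A stripped-down sketch of that idiom, with purely illustrative parameters and the fragments wrapped in a bare JSON array rather than the helper's full --spdk_json_conf document:

config=()
for i in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,
# join the fragments with commas and validate/pretty-print them with jq;
# the real helper embeds them in the larger JSON document that fio expects
printf '[%s]\n' "${config[*]}" | jq .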
00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:48.930 "params": { 00:31:48.930 "name": "Nvme0", 00:31:48.930 "trtype": "tcp", 00:31:48.930 "traddr": "10.0.0.2", 00:31:48.930 "adrfam": "ipv4", 00:31:48.930 "trsvcid": "4420", 00:31:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:48.930 "hdgst": true, 00:31:48.930 "ddgst": true 00:31:48.930 }, 00:31:48.930 "method": "bdev_nvme_attach_controller" 00:31:48.930 }' 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:48.930 00:43:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.190 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:49.190 ... 
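What the fio_bdev wrapper traced above boils down to is: preload the SPDK bdev engine, hand fio the JSON with the hdgst/ddgst-enabled bdev_nvme_attach_controller entry (printed just before this point), and give it a job matching the bs=128k, iodepth=3, numjobs=3, runtime=10 parameters set in dif.sh. A hand-run equivalent, assuming the generated JSON has been saved to bdev.json and that the attached controller Nvme0 exposes its first namespace as bdev Nvme0n1 (both the file name and the bdev name are assumptions; the test itself streams the config over /dev/fd/62 instead):

cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1               # the SPDK fio plugin runs jobs as threads ("Starting 3 threads" above)
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1       # assumed bdev name for namespace 1 of the "Nvme0" controller in bdev.json
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio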
00:31:49.190 fio-3.35 00:31:49.190 Starting 3 threads 00:31:49.450 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.674 00:32:01.674 filename0: (groupid=0, jobs=1): err= 0: pid=1320233: Tue Jul 16 00:43:13 2024 00:32:01.674 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(270MiB/10046msec) 00:32:01.674 slat (nsec): min=5599, max=39442, avg=7176.22, stdev=1530.96 00:32:01.674 clat (usec): min=7879, max=56996, avg=13936.71, stdev=4476.18 00:32:01.674 lat (usec): min=7888, max=57005, avg=13943.89, stdev=4476.24 00:32:01.674 clat percentiles (usec): 00:32:01.674 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11469], 20.00th=[12387], 00:32:01.674 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[14091], 00:32:01.674 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:32:01.674 | 99.00th=[51643], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:32:01.674 | 99.99th=[56886] 00:32:01.674 bw ( KiB/s): min=23552, max=29696, per=34.41%, avg=27596.80, stdev=1689.15, samples=20 00:32:01.674 iops : min= 184, max= 232, avg=215.60, stdev=13.20, samples=20 00:32:01.674 lat (msec) : 10=3.29%, 20=95.64%, 100=1.07% 00:32:01.674 cpu : usr=95.09%, sys=4.62%, ctx=28, majf=0, minf=67 00:32:01.674 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.674 filename0: (groupid=0, jobs=1): err= 0: pid=1320234: Tue Jul 16 00:43:13 2024 00:32:01.674 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10045msec) 00:32:01.674 slat (nsec): min=5681, max=67787, avg=7250.63, stdev=2028.26 00:32:01.674 clat (msec): min=7, max=136, avg=14.31, stdev= 5.75 00:32:01.674 lat (msec): min=7, max=136, avg=14.31, stdev= 5.75 00:32:01.674 clat percentiles (msec): 00:32:01.674 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:32:01.674 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:32:01.674 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 17], 00:32:01.674 | 99.00th=[ 55], 99.50th=[ 56], 99.90th=[ 94], 99.95th=[ 95], 00:32:01.674 | 99.99th=[ 138] 00:32:01.674 bw ( KiB/s): min=20736, max=29440, per=33.51%, avg=26877.25, stdev=1918.98, samples=20 00:32:01.674 iops : min= 162, max= 230, avg=209.95, stdev=14.99, samples=20 00:32:01.674 lat (msec) : 10=2.71%, 20=96.10%, 50=0.05%, 100=1.09%, 250=0.05% 00:32:01.674 cpu : usr=95.75%, sys=3.98%, ctx=25, majf=0, minf=228 00:32:01.674 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.674 filename0: (groupid=0, jobs=1): err= 0: pid=1320235: Tue Jul 16 00:43:13 2024 00:32:01.674 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(254MiB/10044msec) 00:32:01.674 slat (nsec): min=5632, max=31848, avg=7045.36, stdev=1384.87 00:32:01.674 clat (usec): min=8345, max=58280, avg=14777.66, stdev=5605.29 00:32:01.674 lat (usec): min=8352, max=58286, avg=14784.71, stdev=5605.45 00:32:01.674 clat percentiles (usec): 00:32:01.674 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[11863], 
20.00th=[12911], 00:32:01.674 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:32:01.674 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16057], 95.00th=[16581], 00:32:01.674 | 99.00th=[55313], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:32:01.674 | 99.99th=[58459] 00:32:01.674 bw ( KiB/s): min=23296, max=30976, per=32.44%, avg=26019.75, stdev=1724.35, samples=20 00:32:01.674 iops : min= 182, max= 242, avg=203.25, stdev=13.47, samples=20 00:32:01.674 lat (msec) : 10=2.46%, 20=95.82%, 100=1.72% 00:32:01.674 cpu : usr=95.37%, sys=4.35%, ctx=21, majf=0, minf=150 00:32:01.674 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.674 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:01.674 00:32:01.674 Run status group 0 (all jobs): 00:32:01.675 READ: bw=78.3MiB/s (82.1MB/s), 25.3MiB/s-26.9MiB/s (26.6MB/s-28.2MB/s), io=787MiB (825MB), run=10044-10046msec 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.675 00:32:01.675 real 0m11.195s 00:32:01.675 user 0m42.621s 00:32:01.675 sys 0m1.609s 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.675 00:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.675 ************************************ 00:32:01.675 END TEST fio_dif_digest 00:32:01.675 ************************************ 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:01.675 00:43:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:01.675 00:43:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:01.675 rmmod nvme_tcp 00:32:01.675 
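The Run status line above for the digest group is easy to cross-check: the aggregate io= figure is just bandwidth times runtime, and the parenthesised figures are the usual MiB-to-MB conversion. A throwaway bc session with the numbers copied from the log:

echo 'scale=1; 78.3 * 10.046'   | bc   # 786.6  -> matches io=787MiB over the 10044-10046 msec run
echo 'scale=1; 78.3 * 1.048576' | bc   # 82.1   -> matches the (82.1MB/s) shown next to bw=78.3MiB/s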
rmmod nvme_fabrics 00:32:01.675 rmmod nvme_keyring 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1309790 ']' 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1309790 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1309790 ']' 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1309790 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1309790 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1309790' 00:32:01.675 killing process with pid 1309790 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1309790 00:32:01.675 00:43:13 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1309790 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:01.675 00:43:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:04.223 Waiting for block devices as requested 00:32:04.223 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:04.223 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:04.223 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:04.223 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:04.484 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:04.484 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:04.484 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:04.745 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:04.745 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:04.745 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:05.006 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:05.006 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:05.006 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:05.006 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:05.268 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:05.268 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:05.268 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:05.268 00:43:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:05.268 00:43:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:05.268 00:43:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:05.268 00:43:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:05.268 00:43:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.268 00:43:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:05.268 00:43:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.815 00:43:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:07.815 00:32:07.815 real 1m18.803s 00:32:07.815 user 8m3.502s 00:32:07.815 sys 0m20.478s 00:32:07.815 00:43:20 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:07.815 00:43:20 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.815 ************************************ 00:32:07.815 END TEST nvmf_dif 00:32:07.815 ************************************ 00:32:07.815 00:43:20 -- common/autotest_common.sh@1142 -- # return 0 00:32:07.815 00:43:20 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:07.815 00:43:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:07.815 00:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.815 00:43:20 -- common/autotest_common.sh@10 -- # set +x 00:32:07.815 ************************************ 00:32:07.815 START TEST nvmf_abort_qd_sizes 00:32:07.815 ************************************ 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:07.815 * Looking for test storage... 00:32:07.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.815 00:43:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.816 00:43:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:07.816 00:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:15.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:15.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:15.958 Found net devices under 0000:31:00.0: cvl_0_0 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:15.958 Found net devices under 0000:31:00.1: cvl_0_1 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
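The device-discovery loop traced above (gather_supported_nvmf_pci_devs) boils down to matching the Intel E810 functions by PCI ID and reading their kernel net-device names out of sysfs. A minimal stand-alone sketch of that pattern, assuming the 8086:159b device ID and the sysfs layout seen in this run (illustrative only, not the literal nvmf/common.sh source):

for pci in /sys/bus/pci/devices/*; do
    # Match Intel (0x8086) E810 functions (device 0x159b), as in this run.
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    # A function bound to a kernel netdev driver exposes its interface under .../net/.
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] || continue
        echo "Found net devices under ${pci##*/}: ${netdev##*/}"
    done
done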
00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.958 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.959 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:15.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:32:15.959 00:32:15.959 --- 10.0.0.2 ping statistics --- 00:32:15.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.959 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:32:15.959 00:32:15.959 --- 10.0.0.1 ping statistics --- 00:32:15.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.959 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:15.959 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.202 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:20.202 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1330420 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1330420 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1330420 ']' 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:20.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:20.202 00:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.202 [2024-07-16 00:43:33.367484] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:32:20.203 [2024-07-16 00:43:33.367591] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.203 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.203 [2024-07-16 00:43:33.452377] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:20.203 [2024-07-16 00:43:33.528712] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.203 [2024-07-16 00:43:33.528752] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.203 [2024-07-16 00:43:33.528760] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.203 [2024-07-16 00:43:33.528766] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.203 [2024-07-16 00:43:33.528772] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.203 [2024-07-16 00:43:33.528913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.203 [2024-07-16 00:43:33.529026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.203 [2024-07-16 00:43:33.529181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.203 [2024-07-16 00:43:33.529183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:20.785 00:43:34 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.785 00:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.785 ************************************ 00:32:20.785 START TEST spdk_target_abort 00:32:20.785 ************************************ 00:32:20.785 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:20.785 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:20.785 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:20.785 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.785 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.044 spdk_targetn1 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.044 [2024-07-16 00:43:34.557269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.044 [2024-07-16 00:43:34.597533] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.044 00:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.044 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:21.304 [2024-07-16 00:43:34.783743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:760 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:21.304 [2024-07-16 00:43:34.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0063 p:1 m:0 dnr:0 00:32:24.602 Initializing NVMe Controllers 00:32:24.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:24.602 Initialization complete. Launching workers. 00:32:24.602 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12314, failed: 1 00:32:24.602 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3338, failed to submit 8977 00:32:24.602 success 787, unsuccess 2551, failed 0 00:32:24.602 00:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:24.602 00:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:24.602 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.602 [2024-07-16 00:43:37.946203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:504 len:8 PRP1 0x200007c40000 PRP2 0x0 00:32:24.602 [2024-07-16 00:43:37.946242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:32:26.516 [2024-07-16 00:43:39.976310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:47920 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:32:26.517 [2024-07-16 00:43:39.976357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:26.517 [2024-07-16 00:43:40.095455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:50624 len:8 PRP1 0x200007c62000 PRP2 0x0 00:32:26.517 [2024-07-16 00:43:40.095487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00bc p:1 m:0 dnr:0 00:32:27.460 Initializing NVMe Controllers 00:32:27.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:27.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:27.460 Initialization complete. Launching workers. 
00:32:27.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8693, failed: 3 00:32:27.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7480 00:32:27.460 success 342, unsuccess 874, failed 0 00:32:27.720 00:43:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.720 00:43:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.720 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.268 [2024-07-16 00:43:43.514595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:181 nsid:1 lba:253256 len:8 PRP1 0x200007910000 PRP2 0x0 00:32:30.268 [2024-07-16 00:43:43.514627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:181 cdw0:0 sqhd:00c6 p:0 m:0 dnr:0 00:32:30.840 Initializing NVMe Controllers 00:32:30.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:30.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:30.840 Initialization complete. Launching workers. 00:32:30.840 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41030, failed: 1 00:32:30.840 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2657, failed to submit 38374 00:32:30.840 success 585, unsuccess 2072, failed 0 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.840 00:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1330420 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1330420 ']' 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1330420 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1330420 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1330420' 00:32:32.754 killing process with pid 1330420 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1330420 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1330420 00:32:32.754 00:32:32.754 real 0m12.050s 00:32:32.754 user 0m48.918s 00:32:32.754 sys 0m1.884s 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:32.754 ************************************ 00:32:32.754 END TEST spdk_target_abort 00:32:32.754 ************************************ 00:32:32.754 00:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:32.754 00:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:32.754 00:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:32.754 00:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.754 00:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:32.754 ************************************ 00:32:32.754 START TEST kernel_target_abort 00:32:32.754 ************************************ 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:32.754 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:33.016 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:33.016 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:37.226 Waiting for block devices as requested 00:32:37.226 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:37.226 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:37.486 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:37.486 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:37.745 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:37.745 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:37.745 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:37.745 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:38.006 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:38.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:38.006 No valid GPT data, bailing 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:38.006 00:32:38.006 Discovery Log Number of Records 2, Generation counter 2 00:32:38.006 =====Discovery Log Entry 0====== 00:32:38.006 trtype: tcp 00:32:38.006 adrfam: ipv4 00:32:38.006 subtype: current discovery subsystem 00:32:38.006 treq: not specified, sq flow control disable supported 00:32:38.006 portid: 1 00:32:38.006 trsvcid: 4420 00:32:38.006 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:38.006 traddr: 10.0.0.1 00:32:38.006 eflags: none 00:32:38.006 sectype: none 00:32:38.006 =====Discovery Log Entry 1====== 00:32:38.006 trtype: tcp 00:32:38.006 adrfam: ipv4 00:32:38.006 subtype: nvme subsystem 00:32:38.006 treq: not specified, sq flow control disable supported 00:32:38.006 portid: 1 00:32:38.006 trsvcid: 4420 00:32:38.006 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:38.006 traddr: 10.0.0.1 00:32:38.006 eflags: none 00:32:38.006 sectype: none 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:38.006 
00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.006 00:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.006 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.301 Initializing NVMe Controllers 00:32:41.301 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:41.301 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:41.301 Initialization complete. Launching workers. 00:32:41.301 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55861, failed: 0 00:32:41.301 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55861, failed to submit 0 00:32:41.301 success 0, unsuccess 55861, failed 0 00:32:41.301 00:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:41.301 00:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:41.301 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.603 Initializing NVMe Controllers 00:32:44.603 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.603 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:44.603 Initialization complete. Launching workers. 
00:32:44.603 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98213, failed: 0 00:32:44.603 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24762, failed to submit 73451 00:32:44.603 success 0, unsuccess 24762, failed 0 00:32:44.603 00:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:44.603 00:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:44.603 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.906 Initializing NVMe Controllers 00:32:47.906 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:47.906 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:47.906 Initialization complete. Launching workers. 00:32:47.906 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94133, failed: 0 00:32:47.906 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23538, failed to submit 70595 00:32:47.906 success 0, unsuccess 23538, failed 0 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:47.906 00:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:51.209 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:51.209 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:51.209 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:53.120 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:53.120 00:32:53.120 real 0m20.290s 00:32:53.120 user 0m9.038s 00:32:53.120 sys 0m6.485s 00:32:53.120 00:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:53.120 00:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:53.120 ************************************ 00:32:53.120 END TEST kernel_target_abort 00:32:53.120 ************************************ 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:53.120 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:53.120 rmmod nvme_tcp 00:32:53.120 rmmod nvme_fabrics 00:32:53.381 rmmod nvme_keyring 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1330420 ']' 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1330420 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1330420 ']' 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1330420 00:32:53.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1330420) - No such process 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1330420 is not found' 00:32:53.381 Process with pid 1330420 is not found 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:53.381 00:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:57.586 Waiting for block devices as requested 00:32:57.586 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:57.586 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:57.846 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:57.846 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:57.846 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:58.106 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:58.106 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:58.106 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:58.106 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:58.365 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:58.365 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.365 00:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.904 00:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:00.904 00:33:00.904 real 0m52.963s 00:33:00.904 user 1m3.614s 00:33:00.904 sys 0m19.964s 00:33:00.904 00:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:00.904 00:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:00.904 ************************************ 00:33:00.904 END TEST nvmf_abort_qd_sizes 00:33:00.904 ************************************ 00:33:00.904 00:44:14 -- common/autotest_common.sh@1142 -- # return 0 00:33:00.904 00:44:14 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:00.904 00:44:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:00.904 00:44:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.904 00:44:14 -- common/autotest_common.sh@10 -- # set +x 00:33:00.904 ************************************ 00:33:00.904 START TEST keyring_file 00:33:00.904 ************************************ 00:33:00.904 00:44:14 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:00.904 * Looking for test storage... 
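The clean_kernel_target teardown traced above is plain configfs manipulation. A condensed bash sketch, assuming the same subsystem NQN and port id the test uses; the xtrace output does not show where the 'echo 0' is redirected, presumably it disables the namespace before removal:

    # unlink the subsystem from the port, remove the configfs tree, unload the kernel target
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed redirect target
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet    # once nothing holds the modules any more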
00:33:00.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:00.904 00:44:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:00.904 00:44:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:00.904 00:44:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.905 00:44:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.905 00:44:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.905 00:44:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.905 00:44:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.905 00:44:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.905 00:44:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.905 00:44:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:00.905 00:44:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6rahqGoVVO 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:00.905 00:44:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6rahqGoVVO 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6rahqGoVVO 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6rahqGoVVO 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yJ0wWHylBS 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:00.905 00:44:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yJ0wWHylBS 00:33:00.905 00:44:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yJ0wWHylBS 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yJ0wWHylBS 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=1340980 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1340980 00:33:00.905 00:44:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1340980 ']' 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.905 00:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:00.905 [2024-07-16 00:44:14.353005] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
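The prep_key helper traced just above reduces to: make a temp file, wrap the raw hex key in the NVMeTLSkey-1 interchange format, and restrict it to mode 0600 (a later step in this test shows that looser permissions are rejected). A condensed sketch reusing the helper names from keyring/common.sh and nvmf/common.sh; the exact interchange framing is left to format_interchange_psk, which shells out to python as seen in the trace:

    key_hex=00112233445566778899aabbccddeeff
    key_path=$(mktemp)                                  # e.g. /tmp/tmp.6rahqGoVVO
    format_interchange_psk "$key_hex" 0 > "$key_path"   # writes the NVMeTLSkey-1 form (digest 0)
    chmod 0600 "$key_path"                              # keyring_file_add_key rejects 0660 later in this run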
00:33:00.905 [2024-07-16 00:44:14.353066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340980 ] 00:33:00.905 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.905 [2024-07-16 00:44:14.426207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.905 [2024-07-16 00:44:14.494011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:01.846 00:44:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:01.846 [2024-07-16 00:44:15.129288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.846 null0 00:33:01.846 [2024-07-16 00:44:15.161318] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:01.846 [2024-07-16 00:44:15.161540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:01.846 [2024-07-16 00:44:15.169317] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.846 00:44:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:01.846 [2024-07-16 00:44:15.185363] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:01.846 request: 00:33:01.846 { 00:33:01.846 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.846 "secure_channel": false, 00:33:01.846 "listen_address": { 00:33:01.846 "trtype": "tcp", 00:33:01.846 "traddr": "127.0.0.1", 00:33:01.846 "trsvcid": "4420" 00:33:01.846 }, 00:33:01.846 "method": "nvmf_subsystem_add_listener", 00:33:01.846 "req_id": 1 00:33:01.846 } 00:33:01.846 Got JSON-RPC error response 00:33:01.846 response: 00:33:01.846 { 00:33:01.846 "code": -32602, 00:33:01.846 "message": "Invalid parameters" 00:33:01.846 } 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 
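The duplicate-listener call above is an expected failure, wrapped in the NOT helper from autotest_common.sh. A minimal stand-in for NOT, assuming only the behaviour visible in the trace (the real helper also validates its argument and special-cases exit codes above 128, i.e. deaths by signal):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, remember its exit status
        (( es != 0 ))    # succeed only if the command failed
    }
    # usage mirroring the trace: adding the same listener twice must return an error
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0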
00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.846 00:44:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=1341150 00:33:01.846 00:44:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1341150 /var/tmp/bperf.sock 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1341150 ']' 00:33:01.846 00:44:15 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.846 00:44:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:01.846 [2024-07-16 00:44:15.252056] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:33:01.846 [2024-07-16 00:44:15.252112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341150 ] 00:33:01.846 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.846 [2024-07-16 00:44:15.334543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.846 [2024-07-16 00:44:15.398354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.417 00:44:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:02.417 00:44:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:02.417 00:44:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:02.417 00:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:02.677 00:44:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yJ0wWHylBS 00:33:02.677 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yJ0wWHylBS 00:33:02.677 00:44:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:02.677 00:44:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:02.678 00:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.678 00:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:02.678 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.938 00:44:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.6rahqGoVVO == \/\t\m\p\/\t\m\p\.\6\r\a\h\q\G\o\V\V\O ]] 00:33:02.938 00:44:16 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:02.938 00:44:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:02.938 00:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.938 00:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:02.938 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.199 00:44:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yJ0wWHylBS == \/\t\m\p\/\t\m\p\.\y\J\0\w\W\H\y\l\B\S ]] 00:33:03.199 00:44:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.199 00:44:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:03.199 00:44:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:03.199 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.459 00:44:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:03.459 00:44:16 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.459 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.459 [2024-07-16 00:44:17.066714] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:03.720 nvme0n1 00:33:03.720 00:44:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.720 00:44:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:03.720 00:44:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.720 00:44:17 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.720 00:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.981 00:44:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:03.981 00:44:17 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:03.981 Running I/O for 1 seconds... 00:33:05.431 00:33:05.431 Latency(us) 00:33:05.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.431 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:05.431 nvme0n1 : 1.01 10262.06 40.09 0.00 0.00 12392.16 5215.57 18786.99 00:33:05.431 =================================================================================================================== 00:33:05.431 Total : 10262.06 40.09 0.00 0.00 12392.16 5215.57 18786.99 00:33:05.431 0 00:33:05.431 00:44:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:05.431 00:44:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.431 00:44:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:05.431 00:44:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.431 00:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:05.691 00:44:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:05.691 00:44:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:05.691 00:44:19 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:05.691 00:44:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:05.691 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:05.691 [2024-07-16 00:44:19.239267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:05.691 [2024-07-16 00:44:19.239335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e0f0 (107): Transport endpoint is not connected 00:33:05.691 [2024-07-16 00:44:19.240331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e0f0 (9): Bad file descriptor 00:33:05.691 [2024-07-16 00:44:19.241333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:05.691 [2024-07-16 00:44:19.241340] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:05.691 [2024-07-16 00:44:19.241345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:05.691 request: 00:33:05.691 { 00:33:05.691 "name": "nvme0", 00:33:05.691 "trtype": "tcp", 00:33:05.691 "traddr": "127.0.0.1", 00:33:05.692 "adrfam": "ipv4", 00:33:05.692 "trsvcid": "4420", 00:33:05.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.692 "prchk_reftag": false, 00:33:05.692 "prchk_guard": false, 00:33:05.692 "hdgst": false, 00:33:05.692 "ddgst": false, 00:33:05.692 "psk": "key1", 00:33:05.692 "method": "bdev_nvme_attach_controller", 00:33:05.692 "req_id": 1 00:33:05.692 } 00:33:05.692 Got JSON-RPC error response 00:33:05.692 response: 00:33:05.692 { 00:33:05.692 "code": -5, 00:33:05.692 "message": "Input/output error" 00:33:05.692 } 00:33:05.692 00:44:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:05.692 00:44:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:05.692 00:44:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:05.692 00:44:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:05.692 00:44:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:05.692 00:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:05.692 00:44:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.692 00:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.692 00:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:05.692 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.952 00:44:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:05.952 00:44:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:05.952 00:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:05.952 00:44:19 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:05.952 00:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:05.952 00:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:05.952 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.952 00:44:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:05.952 00:44:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:05.952 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:06.213 00:44:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:06.213 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:06.473 00:44:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:06.473 00:44:19 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:06.473 00:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.473 00:44:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:06.473 00:44:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.6rahqGoVVO 00:33:06.473 00:44:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.473 00:44:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.473 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.733 [2024-07-16 00:44:20.189777] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6rahqGoVVO': 0100660 00:33:06.733 [2024-07-16 00:44:20.189802] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:06.733 request: 00:33:06.733 { 00:33:06.733 "name": "key0", 00:33:06.733 "path": "/tmp/tmp.6rahqGoVVO", 00:33:06.733 "method": "keyring_file_add_key", 00:33:06.733 "req_id": 1 00:33:06.733 } 00:33:06.733 Got JSON-RPC error response 00:33:06.733 response: 00:33:06.734 { 00:33:06.734 "code": -1, 00:33:06.734 "message": "Operation not permitted" 00:33:06.734 } 00:33:06.734 00:44:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:06.734 00:44:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:06.734 00:44:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:06.734 00:44:20 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:06.734 00:44:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.6rahqGoVVO 00:33:06.734 00:44:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.734 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6rahqGoVVO 00:33:06.734 00:44:20 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.6rahqGoVVO 00:33:06.994 00:44:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:06.994 00:44:20 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:06.994 00:44:20 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.994 00:44:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:06.994 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:07.255 [2024-07-16 00:44:20.666968] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6rahqGoVVO': No such file or directory 00:33:07.255 [2024-07-16 00:44:20.666983] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:07.255 [2024-07-16 00:44:20.666999] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:07.255 [2024-07-16 00:44:20.667003] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:07.255 [2024-07-16 00:44:20.667008] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:07.255 request: 00:33:07.255 { 00:33:07.255 "name": "nvme0", 00:33:07.255 "trtype": "tcp", 00:33:07.255 "traddr": "127.0.0.1", 00:33:07.255 "adrfam": "ipv4", 00:33:07.255 
"trsvcid": "4420", 00:33:07.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:07.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:07.255 "prchk_reftag": false, 00:33:07.255 "prchk_guard": false, 00:33:07.255 "hdgst": false, 00:33:07.255 "ddgst": false, 00:33:07.255 "psk": "key0", 00:33:07.255 "method": "bdev_nvme_attach_controller", 00:33:07.255 "req_id": 1 00:33:07.255 } 00:33:07.255 Got JSON-RPC error response 00:33:07.255 response: 00:33:07.255 { 00:33:07.255 "code": -19, 00:33:07.255 "message": "No such device" 00:33:07.255 } 00:33:07.255 00:44:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:07.255 00:44:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:07.255 00:44:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:07.255 00:44:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:07.255 00:44:20 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:07.255 00:44:20 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sM0DjOo9sx 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:07.255 00:44:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sM0DjOo9sx 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sM0DjOo9sx 00:33:07.255 00:44:20 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.sM0DjOo9sx 00:33:07.255 00:44:20 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sM0DjOo9sx 00:33:07.255 00:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sM0DjOo9sx 00:33:07.516 00:44:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:07.516 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:07.778 nvme0n1 00:33:07.778 
00:44:21 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.778 00:44:21 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:07.778 00:44:21 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:07.778 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:08.038 00:44:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:08.038 00:44:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:08.038 00:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.038 00:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:08.038 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.298 00:44:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:08.298 00:44:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:08.298 00:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:08.298 00:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:08.298 00:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.298 00:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:08.298 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.298 00:44:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:08.298 00:44:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:08.299 00:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:08.559 00:44:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:08.559 00:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.559 00:44:22 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:08.819 00:44:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:08.819 00:44:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sM0DjOo9sx 00:33:08.819 00:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sM0DjOo9sx 00:33:08.819 00:44:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yJ0wWHylBS 00:33:08.819 00:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yJ0wWHylBS 00:33:09.079 00:44:22 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:09.079 00:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:09.340 nvme0n1 00:33:09.340 00:44:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:09.340 00:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:09.601 00:44:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:09.601 "subsystems": [ 00:33:09.601 { 00:33:09.601 "subsystem": "keyring", 00:33:09.601 "config": [ 00:33:09.601 { 00:33:09.601 "method": "keyring_file_add_key", 00:33:09.601 "params": { 00:33:09.601 "name": "key0", 00:33:09.601 "path": "/tmp/tmp.sM0DjOo9sx" 00:33:09.601 } 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "method": "keyring_file_add_key", 00:33:09.601 "params": { 00:33:09.601 "name": "key1", 00:33:09.601 "path": "/tmp/tmp.yJ0wWHylBS" 00:33:09.601 } 00:33:09.601 } 00:33:09.601 ] 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "subsystem": "iobuf", 00:33:09.601 "config": [ 00:33:09.601 { 00:33:09.601 "method": "iobuf_set_options", 00:33:09.601 "params": { 00:33:09.601 "small_pool_count": 8192, 00:33:09.601 "large_pool_count": 1024, 00:33:09.601 "small_bufsize": 8192, 00:33:09.601 "large_bufsize": 135168 00:33:09.601 } 00:33:09.601 } 00:33:09.601 ] 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "subsystem": "sock", 00:33:09.601 "config": [ 00:33:09.601 { 00:33:09.601 "method": "sock_set_default_impl", 00:33:09.601 "params": { 00:33:09.601 "impl_name": "posix" 00:33:09.601 } 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "method": "sock_impl_set_options", 00:33:09.601 "params": { 00:33:09.601 "impl_name": "ssl", 00:33:09.601 "recv_buf_size": 4096, 00:33:09.601 "send_buf_size": 4096, 00:33:09.601 "enable_recv_pipe": true, 00:33:09.601 "enable_quickack": false, 00:33:09.601 "enable_placement_id": 0, 00:33:09.601 "enable_zerocopy_send_server": true, 00:33:09.601 "enable_zerocopy_send_client": false, 00:33:09.601 "zerocopy_threshold": 0, 00:33:09.601 "tls_version": 0, 00:33:09.601 "enable_ktls": false 00:33:09.601 } 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "method": "sock_impl_set_options", 00:33:09.601 "params": { 00:33:09.601 "impl_name": "posix", 00:33:09.601 "recv_buf_size": 2097152, 00:33:09.601 "send_buf_size": 2097152, 00:33:09.601 "enable_recv_pipe": true, 00:33:09.601 "enable_quickack": false, 00:33:09.601 "enable_placement_id": 0, 00:33:09.601 "enable_zerocopy_send_server": true, 00:33:09.601 "enable_zerocopy_send_client": false, 00:33:09.601 "zerocopy_threshold": 0, 00:33:09.601 "tls_version": 0, 00:33:09.601 "enable_ktls": false 00:33:09.601 } 00:33:09.601 } 00:33:09.601 ] 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "subsystem": "vmd", 00:33:09.601 "config": [] 00:33:09.601 }, 00:33:09.601 { 00:33:09.601 "subsystem": "accel", 00:33:09.601 "config": [ 00:33:09.601 { 00:33:09.601 "method": "accel_set_options", 00:33:09.601 "params": { 00:33:09.601 "small_cache_size": 128, 00:33:09.601 "large_cache_size": 16, 00:33:09.601 "task_count": 2048, 00:33:09.601 "sequence_count": 2048, 00:33:09.601 "buf_count": 2048 00:33:09.601 } 00:33:09.601 } 00:33:09.601 ] 00:33:09.601 
}, 00:33:09.601 { 00:33:09.601 "subsystem": "bdev", 00:33:09.601 "config": [ 00:33:09.601 { 00:33:09.601 "method": "bdev_set_options", 00:33:09.601 "params": { 00:33:09.601 "bdev_io_pool_size": 65535, 00:33:09.601 "bdev_io_cache_size": 256, 00:33:09.601 "bdev_auto_examine": true, 00:33:09.601 "iobuf_small_cache_size": 128, 00:33:09.601 "iobuf_large_cache_size": 16 00:33:09.601 } 00:33:09.601 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_raid_set_options", 00:33:09.602 "params": { 00:33:09.602 "process_window_size_kb": 1024 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_iscsi_set_options", 00:33:09.602 "params": { 00:33:09.602 "timeout_sec": 30 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_nvme_set_options", 00:33:09.602 "params": { 00:33:09.602 "action_on_timeout": "none", 00:33:09.602 "timeout_us": 0, 00:33:09.602 "timeout_admin_us": 0, 00:33:09.602 "keep_alive_timeout_ms": 10000, 00:33:09.602 "arbitration_burst": 0, 00:33:09.602 "low_priority_weight": 0, 00:33:09.602 "medium_priority_weight": 0, 00:33:09.602 "high_priority_weight": 0, 00:33:09.602 "nvme_adminq_poll_period_us": 10000, 00:33:09.602 "nvme_ioq_poll_period_us": 0, 00:33:09.602 "io_queue_requests": 512, 00:33:09.602 "delay_cmd_submit": true, 00:33:09.602 "transport_retry_count": 4, 00:33:09.602 "bdev_retry_count": 3, 00:33:09.602 "transport_ack_timeout": 0, 00:33:09.602 "ctrlr_loss_timeout_sec": 0, 00:33:09.602 "reconnect_delay_sec": 0, 00:33:09.602 "fast_io_fail_timeout_sec": 0, 00:33:09.602 "disable_auto_failback": false, 00:33:09.602 "generate_uuids": false, 00:33:09.602 "transport_tos": 0, 00:33:09.602 "nvme_error_stat": false, 00:33:09.602 "rdma_srq_size": 0, 00:33:09.602 "io_path_stat": false, 00:33:09.602 "allow_accel_sequence": false, 00:33:09.602 "rdma_max_cq_size": 0, 00:33:09.602 "rdma_cm_event_timeout_ms": 0, 00:33:09.602 "dhchap_digests": [ 00:33:09.602 "sha256", 00:33:09.602 "sha384", 00:33:09.602 "sha512" 00:33:09.602 ], 00:33:09.602 "dhchap_dhgroups": [ 00:33:09.602 "null", 00:33:09.602 "ffdhe2048", 00:33:09.602 "ffdhe3072", 00:33:09.602 "ffdhe4096", 00:33:09.602 "ffdhe6144", 00:33:09.602 "ffdhe8192" 00:33:09.602 ] 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_nvme_attach_controller", 00:33:09.602 "params": { 00:33:09.602 "name": "nvme0", 00:33:09.602 "trtype": "TCP", 00:33:09.602 "adrfam": "IPv4", 00:33:09.602 "traddr": "127.0.0.1", 00:33:09.602 "trsvcid": "4420", 00:33:09.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.602 "prchk_reftag": false, 00:33:09.602 "prchk_guard": false, 00:33:09.602 "ctrlr_loss_timeout_sec": 0, 00:33:09.602 "reconnect_delay_sec": 0, 00:33:09.602 "fast_io_fail_timeout_sec": 0, 00:33:09.602 "psk": "key0", 00:33:09.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.602 "hdgst": false, 00:33:09.602 "ddgst": false 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_nvme_set_hotplug", 00:33:09.602 "params": { 00:33:09.602 "period_us": 100000, 00:33:09.602 "enable": false 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "bdev_wait_for_examine" 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "nbd", 00:33:09.602 "config": [] 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }' 00:33:09.602 00:44:22 keyring_file -- keyring/file.sh@114 -- # killprocess 1341150 00:33:09.602 00:44:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1341150 ']' 00:33:09.602 00:44:22 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1341150 00:33:09.602 00:44:22 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:09.602 00:44:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:09.602 00:44:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1341150 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1341150' 00:33:09.602 killing process with pid 1341150 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1341150 00:33:09.602 Received shutdown signal, test time was about 1.000000 seconds 00:33:09.602 00:33:09.602 Latency(us) 00:33:09.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.602 =================================================================================================================== 00:33:09.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1341150 00:33:09.602 00:44:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=1342746 00:33:09.602 00:44:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1342746 /var/tmp/bperf.sock 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1342746 ']' 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.602 00:44:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
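The restart step traced here, save_config on the first bdevperf followed by a second bdevperf fed the JSON on /dev/fd/63, condenses to the following; flags are taken verbatim from the trace and paths abbreviated relative to the spdk tree:

    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
    # the first bdevperf is killed in between, as traced above, so the socket path can be reused
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
    # the keyring_file_add_key and bdev_nvme_attach_controller entries in the saved
    # config recreate key0, key1 and nvme0 at startup, which the checks that follow confirm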
00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:09.602 00:44:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.602 00:44:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:09.602 "subsystems": [ 00:33:09.602 { 00:33:09.602 "subsystem": "keyring", 00:33:09.602 "config": [ 00:33:09.602 { 00:33:09.602 "method": "keyring_file_add_key", 00:33:09.602 "params": { 00:33:09.602 "name": "key0", 00:33:09.602 "path": "/tmp/tmp.sM0DjOo9sx" 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "keyring_file_add_key", 00:33:09.602 "params": { 00:33:09.602 "name": "key1", 00:33:09.602 "path": "/tmp/tmp.yJ0wWHylBS" 00:33:09.602 } 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "iobuf", 00:33:09.602 "config": [ 00:33:09.602 { 00:33:09.602 "method": "iobuf_set_options", 00:33:09.602 "params": { 00:33:09.602 "small_pool_count": 8192, 00:33:09.602 "large_pool_count": 1024, 00:33:09.602 "small_bufsize": 8192, 00:33:09.602 "large_bufsize": 135168 00:33:09.602 } 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "sock", 00:33:09.602 "config": [ 00:33:09.602 { 00:33:09.602 "method": "sock_set_default_impl", 00:33:09.602 "params": { 00:33:09.602 "impl_name": "posix" 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "sock_impl_set_options", 00:33:09.602 "params": { 00:33:09.602 "impl_name": "ssl", 00:33:09.602 "recv_buf_size": 4096, 00:33:09.602 "send_buf_size": 4096, 00:33:09.602 "enable_recv_pipe": true, 00:33:09.602 "enable_quickack": false, 00:33:09.602 "enable_placement_id": 0, 00:33:09.602 "enable_zerocopy_send_server": true, 00:33:09.602 "enable_zerocopy_send_client": false, 00:33:09.602 "zerocopy_threshold": 0, 00:33:09.602 "tls_version": 0, 00:33:09.602 "enable_ktls": false 00:33:09.602 } 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "method": "sock_impl_set_options", 00:33:09.602 "params": { 00:33:09.602 "impl_name": "posix", 00:33:09.602 "recv_buf_size": 2097152, 00:33:09.602 "send_buf_size": 2097152, 00:33:09.602 "enable_recv_pipe": true, 00:33:09.602 "enable_quickack": false, 00:33:09.602 "enable_placement_id": 0, 00:33:09.602 "enable_zerocopy_send_server": true, 00:33:09.602 "enable_zerocopy_send_client": false, 00:33:09.602 "zerocopy_threshold": 0, 00:33:09.602 "tls_version": 0, 00:33:09.602 "enable_ktls": false 00:33:09.602 } 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "vmd", 00:33:09.602 "config": [] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "accel", 00:33:09.602 "config": [ 00:33:09.602 { 00:33:09.602 "method": "accel_set_options", 00:33:09.602 "params": { 00:33:09.602 "small_cache_size": 128, 00:33:09.602 "large_cache_size": 16, 00:33:09.602 "task_count": 2048, 00:33:09.602 "sequence_count": 2048, 00:33:09.602 "buf_count": 2048 00:33:09.602 } 00:33:09.602 } 00:33:09.602 ] 00:33:09.602 }, 00:33:09.602 { 00:33:09.602 "subsystem": "bdev", 00:33:09.603 "config": [ 00:33:09.603 { 00:33:09.603 "method": "bdev_set_options", 00:33:09.603 "params": { 00:33:09.603 "bdev_io_pool_size": 65535, 00:33:09.603 "bdev_io_cache_size": 256, 00:33:09.603 "bdev_auto_examine": true, 00:33:09.603 "iobuf_small_cache_size": 128, 00:33:09.603 "iobuf_large_cache_size": 16 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "method": "bdev_raid_set_options", 00:33:09.603 "params": { 00:33:09.603 "process_window_size_kb": 1024 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 
"method": "bdev_iscsi_set_options", 00:33:09.603 "params": { 00:33:09.603 "timeout_sec": 30 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "method": "bdev_nvme_set_options", 00:33:09.603 "params": { 00:33:09.603 "action_on_timeout": "none", 00:33:09.603 "timeout_us": 0, 00:33:09.603 "timeout_admin_us": 0, 00:33:09.603 "keep_alive_timeout_ms": 10000, 00:33:09.603 "arbitration_burst": 0, 00:33:09.603 "low_priority_weight": 0, 00:33:09.603 "medium_priority_weight": 0, 00:33:09.603 "high_priority_weight": 0, 00:33:09.603 "nvme_adminq_poll_period_us": 10000, 00:33:09.603 "nvme_ioq_poll_period_us": 0, 00:33:09.603 "io_queue_requests": 512, 00:33:09.603 "delay_cmd_submit": true, 00:33:09.603 "transport_retry_count": 4, 00:33:09.603 "bdev_retry_count": 3, 00:33:09.603 "transport_ack_timeout": 0, 00:33:09.603 "ctrlr_loss_timeout_sec": 0, 00:33:09.603 "reconnect_delay_sec": 0, 00:33:09.603 "fast_io_fail_timeout_sec": 0, 00:33:09.603 "disable_auto_failback": false, 00:33:09.603 "generate_uuids": false, 00:33:09.603 "transport_tos": 0, 00:33:09.603 "nvme_error_stat": false, 00:33:09.603 "rdma_srq_size": 0, 00:33:09.603 "io_path_stat": false, 00:33:09.603 "allow_accel_sequence": false, 00:33:09.603 "rdma_max_cq_size": 0, 00:33:09.603 "rdma_cm_event_timeout_ms": 0, 00:33:09.603 "dhchap_digests": [ 00:33:09.603 "sha256", 00:33:09.603 "sha384", 00:33:09.603 "sha512" 00:33:09.603 ], 00:33:09.603 "dhchap_dhgroups": [ 00:33:09.603 "null", 00:33:09.603 "ffdhe2048", 00:33:09.603 "ffdhe3072", 00:33:09.603 "ffdhe4096", 00:33:09.603 "ffdhe6144", 00:33:09.603 "ffdhe8192" 00:33:09.603 ] 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "method": "bdev_nvme_attach_controller", 00:33:09.603 "params": { 00:33:09.603 "name": "nvme0", 00:33:09.603 "trtype": "TCP", 00:33:09.603 "adrfam": "IPv4", 00:33:09.603 "traddr": "127.0.0.1", 00:33:09.603 "trsvcid": "4420", 00:33:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.603 "prchk_reftag": false, 00:33:09.603 "prchk_guard": false, 00:33:09.603 "ctrlr_loss_timeout_sec": 0, 00:33:09.603 "reconnect_delay_sec": 0, 00:33:09.603 "fast_io_fail_timeout_sec": 0, 00:33:09.603 "psk": "key0", 00:33:09.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.603 "hdgst": false, 00:33:09.603 "ddgst": false 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "method": "bdev_nvme_set_hotplug", 00:33:09.603 "params": { 00:33:09.603 "period_us": 100000, 00:33:09.603 "enable": false 00:33:09.603 } 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "method": "bdev_wait_for_examine" 00:33:09.603 } 00:33:09.603 ] 00:33:09.603 }, 00:33:09.603 { 00:33:09.603 "subsystem": "nbd", 00:33:09.603 "config": [] 00:33:09.603 } 00:33:09.603 ] 00:33:09.603 }' 00:33:09.603 [2024-07-16 00:44:23.216007] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:33:09.603 [2024-07-16 00:44:23.216075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342746 ] 00:33:09.863 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.863 [2024-07-16 00:44:23.285261] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.863 [2024-07-16 00:44:23.338950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.863 [2024-07-16 00:44:23.480932] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:10.432 00:44:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:10.432 00:44:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:10.432 00:44:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:10.432 00:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.432 00:44:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:10.691 00:44:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:10.691 00:44:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.691 00:44:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:10.691 00:44:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:10.691 00:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.951 00:44:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:10.951 00:44:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:10.951 00:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:10.951 00:44:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:11.212 00:44:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:11.212 00:44:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:11.212 00:44:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sM0DjOo9sx /tmp/tmp.yJ0wWHylBS 00:33:11.212 00:44:24 keyring_file -- keyring/file.sh@20 -- # killprocess 1342746 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1342746 ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1342746 00:33:11.212 00:44:24 keyring_file -- 
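Note (not part of the captured output): the checks at keyring/file.sh@120-122 above verify that both keys are registered and then compare per-key reference counts by piping keyring_get_keys through jq. A condensed bash sketch of that pattern, with the helper names as assumptions standing in for keyring/common.sh, is:

    bperf_rpc()  { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    get_refcnt() { bperf_rpc keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"; }
    (( $(bperf_rpc keyring_get_keys | jq length) == 2 ))   # two keys registered
    (( $(get_refcnt key0) == 2 ))                          # matches the (( 2 == 2 )) check logged above
    (( $(get_refcnt key1) == 1 ))                          # matches the (( 1 == 1 )) check that follows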
common/autotest_common.sh@953 -- # uname 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1342746 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1342746' 00:33:11.212 killing process with pid 1342746 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@967 -- # kill 1342746 00:33:11.212 Received shutdown signal, test time was about 1.000000 seconds 00:33:11.212 00:33:11.212 Latency(us) 00:33:11.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.212 =================================================================================================================== 00:33:11.212 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@972 -- # wait 1342746 00:33:11.212 00:44:24 keyring_file -- keyring/file.sh@21 -- # killprocess 1340980 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1340980 ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1340980 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1340980 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1340980' 00:33:11.212 killing process with pid 1340980 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@967 -- # kill 1340980 00:33:11.212 [2024-07-16 00:44:24.813285] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:11.212 00:44:24 keyring_file -- common/autotest_common.sh@972 -- # wait 1340980 00:33:11.473 00:33:11.473 real 0m10.982s 00:33:11.473 user 0m25.732s 00:33:11.473 sys 0m2.646s 00:33:11.473 00:44:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.473 00:44:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:11.473 ************************************ 00:33:11.473 END TEST keyring_file 00:33:11.473 ************************************ 00:33:11.473 00:44:25 -- common/autotest_common.sh@1142 -- # return 0 00:33:11.473 00:44:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:11.473 00:44:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:11.473 00:44:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:11.473 00:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.473 00:44:25 -- common/autotest_common.sh@10 -- # set +x 00:33:11.734 ************************************ 00:33:11.734 START TEST keyring_linux 00:33:11.734 ************************************ 00:33:11.734 00:44:25 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:11.734 * Looking for test storage... 00:33:11.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:11.734 00:44:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:11.734 00:44:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.734 00:44:25 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.734 00:44:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.735 00:44:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.735 00:44:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.735 00:44:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.735 00:44:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.735 00:44:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.735 00:44:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:11.735 00:44:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:11.735 00:44:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:11.735 /tmp/:spdk-test:key0 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:11.735 00:44:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:11.735 00:44:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:11.735 /tmp/:spdk-test:key1 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1343348 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1343348 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1343348 ']' 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.735 00:44:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:11.735 00:44:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:11.997 [2024-07-16 00:44:25.376910] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:33:11.997 [2024-07-16 00:44:25.376974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343348 ] 00:33:11.997 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.997 [2024-07-16 00:44:25.444462] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.997 [2024-07-16 00:44:25.512697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:12.569 [2024-07-16 00:44:26.106104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.569 null0 00:33:12.569 [2024-07-16 00:44:26.138151] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:12.569 [2024-07-16 00:44:26.138543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:12.569 357551591 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:12.569 1022162025 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1343399 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1343399 /var/tmp/bperf.sock 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1343399 ']' 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:12.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.569 00:44:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:12.569 00:44:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:12.829 [2024-07-16 00:44:26.219593] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:33:12.829 [2024-07-16 00:44:26.219642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343399 ] 00:33:12.829 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.829 [2024-07-16 00:44:26.297353] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.829 [2024-07-16 00:44:26.350949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.402 00:44:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.402 00:44:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:13.402 00:44:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:13.402 00:44:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:13.663 00:44:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:13.663 00:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:13.924 00:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:13.924 [2024-07-16 00:44:27.449890] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:13.924 nvme0n1 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:13.924 00:44:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:13.924 00:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.185 00:44:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:14.186 00:44:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:14.186 00:44:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:14.186 00:44:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:14.186 00:44:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:14.186 00:44:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:14.186 00:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@25 -- # sn=357551591 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 357551591 == \3\5\7\5\5\1\5\9\1 ]] 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 357551591 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:14.447 00:44:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:14.447 Running I/O for 1 seconds... 00:33:15.389 00:33:15.389 Latency(us) 00:33:15.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.389 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:15.390 nvme0n1 : 1.01 9696.18 37.88 0.00 0.00 13111.03 8574.29 19333.12 00:33:15.390 =================================================================================================================== 00:33:15.390 Total : 9696.18 37.88 0.00 0.00 13111.03 8574.29 19333.12 00:33:15.390 0 00:33:15.390 00:44:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:15.390 00:44:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:15.651 00:44:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:15.651 00:44:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:15.651 00:44:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:15.651 00:44:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:15.651 00:44:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:15.651 00:44:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.912 00:44:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:15.912 00:44:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:15.912 00:44:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:15.912 00:44:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.912 00:44:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:15.912 00:44:29 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:15.912 [2024-07-16 00:44:29.429966] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:15.912 [2024-07-16 00:44:29.430797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18990b0 (107): Transport endpoint is not connected 00:33:15.912 [2024-07-16 00:44:29.431793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18990b0 (9): Bad file descriptor 00:33:15.912 [2024-07-16 00:44:29.432795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:15.912 [2024-07-16 00:44:29.432801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:15.912 [2024-07-16 00:44:29.432807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:15.912 request: 00:33:15.913 { 00:33:15.913 "name": "nvme0", 00:33:15.913 "trtype": "tcp", 00:33:15.913 "traddr": "127.0.0.1", 00:33:15.913 "adrfam": "ipv4", 00:33:15.913 "trsvcid": "4420", 00:33:15.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.913 "prchk_reftag": false, 00:33:15.913 "prchk_guard": false, 00:33:15.913 "hdgst": false, 00:33:15.913 "ddgst": false, 00:33:15.913 "psk": ":spdk-test:key1", 00:33:15.913 "method": "bdev_nvme_attach_controller", 00:33:15.913 "req_id": 1 00:33:15.913 } 00:33:15.913 Got JSON-RPC error response 00:33:15.913 response: 00:33:15.913 { 00:33:15.913 "code": -5, 00:33:15.913 "message": "Input/output error" 00:33:15.913 } 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@33 -- # sn=357551591 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 357551591 00:33:15.913 1 links removed 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@33 -- # sn=1022162025 00:33:15.913 
00:44:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1022162025 00:33:15.913 1 links removed 00:33:15.913 00:44:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1343399 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1343399 ']' 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1343399 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1343399 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1343399' 00:33:15.913 killing process with pid 1343399 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 1343399 00:33:15.913 Received shutdown signal, test time was about 1.000000 seconds 00:33:15.913 00:33:15.913 Latency(us) 00:33:15.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.913 =================================================================================================================== 00:33:15.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.913 00:44:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 1343399 00:33:16.174 00:44:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1343348 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1343348 ']' 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1343348 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1343348 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1343348' 00:33:16.174 killing process with pid 1343348 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 1343348 00:33:16.174 00:44:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 1343348 00:33:16.435 00:33:16.435 real 0m4.782s 00:33:16.435 user 0m8.251s 00:33:16.435 sys 0m1.412s 00:33:16.435 00:44:29 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:16.435 00:44:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:16.435 ************************************ 00:33:16.435 END TEST keyring_linux 00:33:16.435 ************************************ 00:33:16.435 00:44:29 -- common/autotest_common.sh@1142 -- # return 0 00:33:16.435 00:44:29 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:16.435 00:44:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:16.436 00:44:29 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:16.436 00:44:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:16.436 00:44:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:16.436 00:44:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:16.436 00:44:29 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:16.436 00:44:29 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:16.436 00:44:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:16.436 00:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:16.436 00:44:29 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:16.436 00:44:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:16.436 00:44:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:16.436 00:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:24.576 INFO: APP EXITING 00:33:24.576 INFO: killing all VMs 00:33:24.576 INFO: killing vhost app 00:33:24.576 INFO: EXIT DONE 00:33:27.874 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:27.875 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:27.875 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:32.077 Cleaning 00:33:32.077 Removing: /var/run/dpdk/spdk0/config 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:32.077 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:32.077 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:32.077 Removing: /var/run/dpdk/spdk1/config 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:32.077 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:32.077 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:32.077 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:32.077 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:32.077 Removing: /var/run/dpdk/spdk2/config 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:32.077 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:32.077 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:32.077 Removing: /var/run/dpdk/spdk3/config 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:32.077 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:32.077 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:32.077 Removing: /var/run/dpdk/spdk4/config 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:32.077 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:32.077 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:32.077 Removing: /dev/shm/bdev_svc_trace.1 00:33:32.077 Removing: /dev/shm/nvmf_trace.0 00:33:32.077 Removing: /dev/shm/spdk_tgt_trace.pid858091 00:33:32.077 Removing: /var/run/dpdk/spdk0 00:33:32.077 Removing: /var/run/dpdk/spdk1 00:33:32.077 Removing: /var/run/dpdk/spdk2 00:33:32.077 Removing: /var/run/dpdk/spdk3 00:33:32.077 Removing: /var/run/dpdk/spdk4 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1000122 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1000169 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1000463 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1001894 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1003286 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1013888 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1014335 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1019827 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1027195 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1030250 00:33:32.077 Removing: 
/var/run/dpdk/spdk_pid1043837 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1056009 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1058014 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1059040 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1081021 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1086064 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1117421 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1123157 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1125152 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1127388 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1127513 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1127859 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1128103 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1128682 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1130927 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1132004 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1132559 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1135085 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1135797 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1136554 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1142640 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1155747 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1160562 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1168303 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1169807 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1171473 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1177236 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1182624 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1192732 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1192740 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1198269 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1198584 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1198912 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1199281 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1199418 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1205754 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1206534 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1212261 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1215464 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1222514 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1229408 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1239979 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1249094 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1249096 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1273447 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1274243 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1274982 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1275672 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1276723 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1277429 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1278111 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1278793 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1284437 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1284645 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1292237 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1292614 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1295126 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1302913 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1302918 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1310132 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1312355 00:33:32.077 Removing: /var/run/dpdk/spdk_pid1314845 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1316051 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1318556 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1319896 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1330622 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1331290 00:33:32.337 Removing: 
/var/run/dpdk/spdk_pid1331952 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1335003 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1335431 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1336030 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1340980 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1341150 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1342746 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1343348 00:33:32.337 Removing: /var/run/dpdk/spdk_pid1343399 00:33:32.337 Removing: /var/run/dpdk/spdk_pid856448 00:33:32.337 Removing: /var/run/dpdk/spdk_pid858091 00:33:32.337 Removing: /var/run/dpdk/spdk_pid858641 00:33:32.337 Removing: /var/run/dpdk/spdk_pid859708 00:33:32.337 Removing: /var/run/dpdk/spdk_pid860020 00:33:32.337 Removing: /var/run/dpdk/spdk_pid861173 00:33:32.337 Removing: /var/run/dpdk/spdk_pid861414 00:33:32.337 Removing: /var/run/dpdk/spdk_pid861693 00:33:32.337 Removing: /var/run/dpdk/spdk_pid862666 00:33:32.337 Removing: /var/run/dpdk/spdk_pid863438 00:33:32.337 Removing: /var/run/dpdk/spdk_pid863742 00:33:32.337 Removing: /var/run/dpdk/spdk_pid864005 00:33:32.337 Removing: /var/run/dpdk/spdk_pid864311 00:33:32.337 Removing: /var/run/dpdk/spdk_pid864688 00:33:32.337 Removing: /var/run/dpdk/spdk_pid865041 00:33:32.337 Removing: /var/run/dpdk/spdk_pid865343 00:33:32.337 Removing: /var/run/dpdk/spdk_pid865576 00:33:32.337 Removing: /var/run/dpdk/spdk_pid866844 00:33:32.337 Removing: /var/run/dpdk/spdk_pid870101 00:33:32.337 Removing: /var/run/dpdk/spdk_pid870465 00:33:32.337 Removing: /var/run/dpdk/spdk_pid870831 00:33:32.337 Removing: /var/run/dpdk/spdk_pid871011 00:33:32.337 Removing: /var/run/dpdk/spdk_pid871538 00:33:32.337 Removing: /var/run/dpdk/spdk_pid871556 00:33:32.337 Removing: /var/run/dpdk/spdk_pid872090 00:33:32.337 Removing: /var/run/dpdk/spdk_pid872256 00:33:32.337 Removing: /var/run/dpdk/spdk_pid872607 00:33:32.337 Removing: /var/run/dpdk/spdk_pid872631 00:33:32.337 Removing: /var/run/dpdk/spdk_pid872992 00:33:32.337 Removing: /var/run/dpdk/spdk_pid873008 00:33:32.337 Removing: /var/run/dpdk/spdk_pid873516 00:33:32.337 Removing: /var/run/dpdk/spdk_pid873799 00:33:32.337 Removing: /var/run/dpdk/spdk_pid874194 00:33:32.337 Removing: /var/run/dpdk/spdk_pid874563 00:33:32.337 Removing: /var/run/dpdk/spdk_pid874584 00:33:32.337 Removing: /var/run/dpdk/spdk_pid874653 00:33:32.337 Removing: /var/run/dpdk/spdk_pid875006 00:33:32.337 Removing: /var/run/dpdk/spdk_pid875361 00:33:32.337 Removing: /var/run/dpdk/spdk_pid875706 00:33:32.337 Removing: /var/run/dpdk/spdk_pid875895 00:33:32.337 Removing: /var/run/dpdk/spdk_pid876109 00:33:32.337 Removing: /var/run/dpdk/spdk_pid876447 00:33:32.337 Removing: /var/run/dpdk/spdk_pid876796 00:33:32.337 Removing: /var/run/dpdk/spdk_pid877148 00:33:32.337 Removing: /var/run/dpdk/spdk_pid877369 00:33:32.337 Removing: /var/run/dpdk/spdk_pid877557 00:33:32.337 Removing: /var/run/dpdk/spdk_pid877886 00:33:32.337 Removing: /var/run/dpdk/spdk_pid878240 00:33:32.337 Removing: /var/run/dpdk/spdk_pid878589 00:33:32.598 Removing: /var/run/dpdk/spdk_pid878856 00:33:32.598 Removing: /var/run/dpdk/spdk_pid879040 00:33:32.598 Removing: /var/run/dpdk/spdk_pid879330 00:33:32.598 Removing: /var/run/dpdk/spdk_pid879686 00:33:32.598 Removing: /var/run/dpdk/spdk_pid880038 00:33:32.598 Removing: /var/run/dpdk/spdk_pid880399 00:33:32.598 Removing: /var/run/dpdk/spdk_pid880687 00:33:32.598 Removing: /var/run/dpdk/spdk_pid880914 00:33:32.598 Removing: /var/run/dpdk/spdk_pid881325 00:33:32.598 Removing: /var/run/dpdk/spdk_pid886588 00:33:32.598 Removing: 
/var/run/dpdk/spdk_pid945029 00:33:32.598 Removing: /var/run/dpdk/spdk_pid950589 00:33:32.598 Removing: /var/run/dpdk/spdk_pid962979 00:33:32.598 Removing: /var/run/dpdk/spdk_pid970027 00:33:32.598 Removing: /var/run/dpdk/spdk_pid975395 00:33:32.598 Removing: /var/run/dpdk/spdk_pid976078 00:33:32.598 Removing: /var/run/dpdk/spdk_pid983938 00:33:32.598 Removing: /var/run/dpdk/spdk_pid991542 00:33:32.598 Removing: /var/run/dpdk/spdk_pid991635 00:33:32.598 Removing: /var/run/dpdk/spdk_pid992664 00:33:32.598 Removing: /var/run/dpdk/spdk_pid993730 00:33:32.598 Removing: /var/run/dpdk/spdk_pid994859 00:33:32.598 Removing: /var/run/dpdk/spdk_pid995702 00:33:32.598 Removing: /var/run/dpdk/spdk_pid995876 00:33:32.598 Removing: /var/run/dpdk/spdk_pid996281 00:33:32.598 Removing: /var/run/dpdk/spdk_pid996421 00:33:32.598 Removing: /var/run/dpdk/spdk_pid996432 00:33:32.598 Removing: /var/run/dpdk/spdk_pid997437 00:33:32.598 Removing: /var/run/dpdk/spdk_pid998439 00:33:32.598 Removing: /var/run/dpdk/spdk_pid999446 00:33:32.598 Clean 00:33:32.598 00:44:46 -- common/autotest_common.sh@1451 -- # return 0 00:33:32.598 00:44:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:32.598 00:44:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:32.598 00:44:46 -- common/autotest_common.sh@10 -- # set +x 00:33:32.598 00:44:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:32.598 00:44:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:32.598 00:44:46 -- common/autotest_common.sh@10 -- # set +x 00:33:32.860 00:44:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:32.860 00:44:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:32.860 00:44:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:32.860 00:44:46 -- spdk/autotest.sh@391 -- # hash lcov 00:33:32.860 00:44:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:32.860 00:44:46 -- spdk/autotest.sh@393 -- # hostname 00:33:32.860 00:44:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:32.860 geninfo: WARNING: invalid characters removed from testname! 
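Note (not part of the captured output): from this point the run is no longer testing anything; the lcov invocations above and below capture the post-test coverage counters, merge them with the pre-test baseline, and strip out-of-scope sources (DPDK, system headers, examples and tools) before report generation. A condensed bash sketch, with $SPDK_DIR and $OUT as placeholders for the workspace paths and the repeated --rc/--no-external flags omitted, is:

    lcov -q -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"                 # capture post-test counters
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"               # drop out-of-scope sources
    done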
00:33:59.441 00:45:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:00.009 00:45:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:01.389 00:45:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:03.298 00:45:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:04.681 00:45:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:06.642 00:45:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:08.073 00:45:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:08.073 00:45:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.073 00:45:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:08.073 00:45:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.073 00:45:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.073 00:45:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.073 00:45:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.073 00:45:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.073 00:45:21 -- paths/export.sh@5 -- $ export PATH 00:34:08.073 00:45:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.073 00:45:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:08.073 00:45:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:08.073 00:45:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721083521.XXXXXX 00:34:08.073 00:45:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721083521.ws1IPU 00:34:08.073 00:45:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:08.073 00:45:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:08.073 00:45:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:08.073 00:45:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:08.073 00:45:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:08.073 00:45:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:08.073 00:45:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:08.073 00:45:21 -- common/autotest_common.sh@10 -- $ set +x 00:34:08.073 00:45:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:08.073 00:45:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:08.073 00:45:21 -- pm/common@17 -- $ local monitor 00:34:08.073 00:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.073 00:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.073 00:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.073 00:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:08.073 00:45:21 -- pm/common@21 -- $ date +%s 00:34:08.073 00:45:21 -- pm/common@25 -- $ sleep 1 00:34:08.073 
00:45:21 -- pm/common@21 -- $ date +%s 00:34:08.073 00:45:21 -- pm/common@21 -- $ date +%s 00:34:08.073 00:45:21 -- pm/common@21 -- $ date +%s 00:34:08.073 00:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083521 00:34:08.073 00:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083521 00:34:08.073 00:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083521 00:34:08.073 00:45:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083521 00:34:08.073 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083521_collect-vmstat.pm.log 00:34:08.073 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083521_collect-cpu-load.pm.log 00:34:08.073 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083521_collect-cpu-temp.pm.log 00:34:08.073 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083521_collect-bmc-pm.bmc.pm.log 00:34:09.016 00:45:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:09.016 00:45:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:09.016 00:45:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.016 00:45:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:09.016 00:45:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:09.016 00:45:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:09.016 00:45:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:09.016 00:45:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:09.016 00:45:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:09.016 00:45:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:09.016 00:45:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:09.016 00:45:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:09.016 00:45:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:09.016 00:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:09.016 00:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:09.016 00:45:22 -- pm/common@44 -- $ pid=1356722 00:34:09.016 00:45:22 -- pm/common@50 -- $ kill -TERM 1356722 00:34:09.016 00:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:09.016 00:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:09.016 00:45:22 -- pm/common@44 -- $ pid=1356723 00:34:09.016 00:45:22 -- pm/common@50 -- $ 
kill -TERM 1356723 00:34:09.016 00:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:09.016 00:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:09.016 00:45:22 -- pm/common@44 -- $ pid=1356725 00:34:09.016 00:45:22 -- pm/common@50 -- $ kill -TERM 1356725 00:34:09.016 00:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:09.016 00:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:09.016 00:45:22 -- pm/common@44 -- $ pid=1356748 00:34:09.016 00:45:22 -- pm/common@50 -- $ sudo -E kill -TERM 1356748 00:34:09.016 + [[ -n 732328 ]] 00:34:09.016 + sudo kill 732328 00:34:09.028 [Pipeline] } 00:34:09.046 [Pipeline] // stage 00:34:09.051 [Pipeline] } 00:34:09.064 [Pipeline] // timeout 00:34:09.069 [Pipeline] } 00:34:09.086 [Pipeline] // catchError 00:34:09.092 [Pipeline] } 00:34:09.110 [Pipeline] // wrap 00:34:09.116 [Pipeline] } 00:34:09.134 [Pipeline] // catchError 00:34:09.143 [Pipeline] stage 00:34:09.144 [Pipeline] { (Epilogue) 00:34:09.158 [Pipeline] catchError 00:34:09.160 [Pipeline] { 00:34:09.176 [Pipeline] echo 00:34:09.178 Cleanup processes 00:34:09.186 [Pipeline] sh 00:34:09.478 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.478 1356828 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:09.478 1357270 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.493 [Pipeline] sh 00:34:09.782 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:09.782 ++ grep -v 'sudo pgrep' 00:34:09.782 ++ awk '{print $1}' 00:34:09.782 + sudo kill -9 1356828 00:34:09.825 [Pipeline] sh 00:34:10.107 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:22.342 [Pipeline] sh 00:34:22.632 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:22.632 Artifacts sizes are good 00:34:22.648 [Pipeline] archiveArtifacts 00:34:22.656 Archiving artifacts 00:34:22.867 [Pipeline] sh 00:34:23.153 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:23.167 [Pipeline] cleanWs 00:34:23.177 [WS-CLEANUP] Deleting project workspace... 00:34:23.177 [WS-CLEANUP] Deferred wipeout is used... 00:34:23.185 [WS-CLEANUP] done 00:34:23.186 [Pipeline] } 00:34:23.198 [Pipeline] // catchError 00:34:23.267 [Pipeline] sh 00:34:23.550 + logger -p user.info -t JENKINS-CI 00:34:23.559 [Pipeline] } 00:34:23.577 [Pipeline] // stage 00:34:23.584 [Pipeline] } 00:34:23.600 [Pipeline] // node 00:34:23.607 [Pipeline] End of Pipeline 00:34:23.653 Finished: SUCCESS
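Closing note on the teardown recorded above: the pm/common lines stop the four resource monitors started at the beginning of autopackage (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) by reading each monitor's pidfile from the power/ output directory and sending SIGTERM. A minimal sketch of that pidfile pattern follows; the helper name and structure are illustrative, not the pm/common source, and only the file names and the sudo handling of the BMC collector are taken from this log.

#!/usr/bin/env bash
# Illustrative sketch of the pidfile-based shutdown used for the resource monitors.

power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power

stop_monitors() {
    local monitor pidfile pid
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue        # monitor was never started on this run
        pid=$(<"$pidfile")
        if [[ $monitor == collect-bmc-pm ]]; then
            sudo kill -TERM "$pid"           # the BMC collector was launched via sudo
        else
            kill -TERM "$pid"
        fi
    done
}

stop_monitors

SIGTERM lets each collector shut down cleanly after writing its .pm.log; the separate kill -9 of the ipmitool process in the epilogue above appears to be a fallback for a BMC child process that outlived its parent collector.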